Sample records for resolving deconvolution ambiguity

  1. A Model Based Deconvolution Approach for Creating Surface Composition Maps of Irregularly Shaped Bodies from Limited Orbiting Nuclear Spectrometer Measurements

    NASA Astrophysics Data System (ADS)

    Dallmann, N. A.; Carlsten, B. E.; Stonehill, L. C.

    2017-12-01

    Orbiting nuclear spectrometers have contributed significantly to our understanding of the composition of solar system bodies. Gamma rays and neutrons are produced within the surfaces of bodies by impacting galactic cosmic rays (GCR) and by intrinsic radionuclide decay. Measuring the flux and energy spectrum of these products at one point in an orbit elucidates the elemental content of the area in view. Deconvolution of measurements from many spatially registered orbit points can produce detailed maps of elemental abundances. Applying these well-established techniques to small, irregularly shaped bodies like Phobos raises unique challenges beyond those of a large spheroid. Polar mapping orbits are not possible for Phobos, and quasistatic orbits will realize only modest inclinations, unavoidably limiting surface coverage and creating North-South ambiguities in deconvolution. The irregular shape causes self-shadowing, both of the body from the spectrometer and of the body from the incoming GCR. The view angle to the surface normal, as well as the distance between the surface and the spectrometer, is highly irregular. These characteristics can be synthesized into a complicated and continuously changing measurement-system point spread function. We have begun to explore different model-based, statistically rigorous, iterative deconvolution methods to produce elemental abundance maps for a proposed future investigation of Phobos. By incorporating the satellite orbit, the existing high-accuracy shape models of Phobos, and the spectrometer response function, a detailed and accurate system model can be constructed. Many aspects of this model formation are particularly well suited to modern graphics processing techniques and parallel processing. We will present the current status and preliminary visualizations of the Phobos measurement system model.
We will also discuss different deconvolution strategies and their relative merit in statistical rigor, stability, achievable resolution, and exploitation of the irregular shape to partially resolve ambiguities. The general applicability of these new approaches to existing data sets from Mars, Mercury, and Lunar investigations will be noted.
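
The model-based, statistically rigorous iteration described above can be sketched (in drastically simplified form) with a maximum-likelihood expectation-maximization (ML-EM) update for Poisson counting data. Here a small random matrix stands in for the real system model built from the orbit, shape model, and spectrometer response; the sizes, weights, and abundances are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "system model": each of 40 orbit measurements views the 10 surface
# elements with different weights (footprint x solid angle x shadowing),
# here drawn at random as a stand-in for the real orbit/shape/response model.
n_meas, n_surf = 40, 10
A = rng.uniform(0.1, 1.0, size=(n_meas, n_surf))

true_abundance = np.array([1., 1., 5., 1., 1., 2., 1., 8., 1., 1.])
counts = A @ true_abundance          # noise-free expected counts

# ML-EM iteration for the Poisson model counts ~ Poisson(A @ x):
#   x <- x * (A^T (counts / (A @ x))) / (A^T 1)
x = np.ones(n_surf)
sens = A.sum(axis=0)                 # sensitivity image (A^T 1)
for _ in range(5000):
    x *= (A.T @ (counts / (A @ x))) / sens
```

The multiplicative update keeps every abundance non-negative and monotonically increases the Poisson likelihood, which is what makes the scheme attractive for low-count nuclear spectroscopy.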

  2. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  3. Resolving the Azimuthal Ambiguity in Vector Magnetogram Data with the Divergence-Free Condition: Theoretical Examination

    NASA Technical Reports Server (NTRS)

    Crouch, A.; Barnes, G.

    2008-01-01

    We demonstrate that the azimuthal ambiguity that is present in solar vector magnetogram data can be resolved with line-of-sight and horizontal heliographic derivative information by using the divergence-free property of magnetic fields without additional assumptions. We discuss the specific derivative information that is sufficient to resolve the ambiguity away from disk center, with particular emphasis on the line-of-sight derivative of the various components of the magnetic field. Conversely, we also show cases where ambiguity resolution fails because sufficient line-of-sight derivative information is not available. For example, knowledge of only the line-of-sight derivative of the line-of-sight component of the field is not sufficient to resolve the ambiguity away from disk center.
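
A minimal numerical sketch of the divergence-free criterion: given the line-of-sight derivative of the line-of-sight field component, pick the sign of the transverse field that minimizes |∇·B|. The toy field below is invented for illustration, and the sign is chosen globally rather than per pixel as a real disambiguation would require.

```python
import numpy as np

# Toy field sampled near disk center: B = (x, y, -2z) is divergence-free
# (1 + 1 - 2 = 0).  The transverse field from a magnetogram is known only
# up to a 180-degree flip, i.e. up to the sign s of (Bx, By).
n = 32
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")

Bx, By = X, Y                        # transverse field, true sign
dBz_dz = -2.0 * np.ones((n, n))      # line-of-sight derivative, assumed known

# Horizontal divergence for each sign candidate; np.gradient returns
# d/dx (axis 0) and d/dy (axis 1) for indexing="ij".
div_h = np.gradient(Bx, x, axis=0) + np.gradient(By, y, axis=1)
residual = {s: np.abs(s * div_h + dBz_dz).mean() for s in (+1, -1)}
best_sign = min(residual, key=residual.get)
```

With the correct sign the divergence residual vanishes; with the flipped sign it cannot, which is exactly the information the paper shows is available away from disk center.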

  4. Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Michailovich, O.; Rathi, Y.

    2014-03-01

    High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space, and are usually used as an input to q-ball imaging (QBI), which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.

  5. Ambiguities and conventions in the perception of visual art.

    PubMed

    Mamassian, Pascal

    2008-09-01

    Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to these ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how visual arts differ from everyday perception.

  6. The Kindergarten Path Effect Revisited: Children's Use of Context in Processing Structural Ambiguities

    ERIC Educational Resources Information Center

    Weighall, Anna R.

    2008-01-01

    Research with adults has shown that ambiguous spoken sentences are resolved efficiently, exploiting multiple cues--including referential context--to select the intended meaning. Paradoxically, children appear to be insensitive to referential cues when resolving ambiguous sentences, relying instead on statistical properties intrinsic to the…

  7. Phase-ambiguity resolution for QPSK modulation systems. Part 2: A method to resolve offset QPSK

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    Part 2 presents a new method to resolve the phase-ambiguity for Offset QPSK modulation systems. When an Offset Quaternary Phase-Shift-Keyed (OQPSK) communications link is utilized, the phase ambiguity of the reference carrier must be resolved. At the transmitter, two different unique words are separately modulated onto the quadrature carriers. At the receiver, the recovered carrier may have one of four possible phases, 0, 90, 180, or 270 degrees, referenced to the nominally correct phase. The IF portion of the channel may cause a phase-sense reversal, i.e., a reversal in the direction of phase rotation for a specified bit pattern. Hence, eight possible phase relationships (the so-called eight ambiguous phase conditions) between input and output of the demodulator must be resolved. Using the In-phase (I)/Quadrature (Q) channel reversal correcting property of an OQPSK Costas loop with integrated symbol synchronization, four ambiguous phase conditions are eliminated. Thus, only four possible ambiguous phase conditions remain. The errors caused by the remaining ambiguous phase conditions can be corrected by monitoring and detecting the polarity of the two unique words. The correction of the unique word polarities results in the complete phase-ambiguity resolution for the OQPSK system.
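
The unique-word polarity check in the final step can be sketched as follows. The unique words, data bits, and framing below are invented, and the sketch only covers the four residual sign ambiguities; the real system also handles phase-sense reversal and operates on noisy soft symbols rather than clean ±1 values.

```python
import numpy as np

# Known unique words modulated on the I and Q rails (+/-1 symbols); these
# particular bit patterns are made up for illustration.
UW_I = np.array([+1, -1, +1, +1, -1, -1, +1, -1])
UW_Q = np.array([-1, -1, +1, -1, +1, +1, -1, +1])

def resolve_polarity(rx_i, rx_q):
    """Correct the residual ambiguity states left after the Costas loop
    has removed I/Q reversal: each rail may still be inverted.  The
    polarity of the correlation against the known unique word decides
    whether to flip a rail back."""
    flip_i = np.sign(np.dot(rx_i[:len(UW_I)], UW_I))
    flip_q = np.sign(np.dot(rx_q[:len(UW_Q)], UW_Q))
    return flip_i * rx_i, flip_q * rx_q

# Example: a 180-degree carrier ambiguity inverts both rails.
data_i = np.concatenate([UW_I, [+1, +1, -1, +1]])
data_q = np.concatenate([UW_Q, [-1, +1, +1, -1]])
fixed_i, fixed_q = resolve_polarity(-data_i, -data_q)
```

Because each unique word correlates strongly only with itself, the sign of a single dot product is enough to detect and undo an inverted rail.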

  8. Application of the Lucy–Richardson Deconvolution Procedure to High Resolution Photoemission Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rameau, J.; Yang, H.-B.; Johnson, P.D.

    2010-07-01

    Angle-resolved photoemission has developed into one of the leading probes of the electronic structure and associated dynamics of condensed matter systems. As with any experimental technique, the ability to resolve features in the spectra is ultimately limited by the resolution of the instrumentation used in the measurement. Previously developed for sharpening astronomical images, the Lucy-Richardson deconvolution technique proves to be a useful tool for improving the photoemission spectra obtained in modern hemispherical electron spectrometers, where the photoelectron spectrum is displayed as a 2D image in energy and momentum space.
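
As a rough illustration of the technique (in 1-D rather than on a 2-D energy-momentum image), the standard Lucy-Richardson iteration below sharpens two blurred "spectral" peaks; the peak positions, widths, and Gaussian resolution function are all invented.

```python
import numpy as np

def lucy_richardson(blurred, psf, n_iter=200):
    """1-D Lucy-Richardson deconvolution (flux-conserving, Poisson ML)."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Two narrow "spectral" peaks blurred by a Gaussian resolution function.
x = np.arange(100, dtype=float)
truth = (np.exp(-0.5 * ((x - 40) / 1.5) ** 2)
         + np.exp(-0.5 * ((x - 52) / 1.5) ** 2))
k = np.arange(-12, 13, dtype=float)
psf = np.exp(-0.5 * (k / 4.0) ** 2)
psf /= psf.sum()                      # normalized instrument response
blurred = np.convolve(truth, psf, mode="same")
sharpened = lucy_richardson(blurred, psf)
```

After a few hundred iterations the dip between the two peaks deepens markedly while the total intensity is approximately conserved, which is the behavior that makes the method attractive for photoemission maps.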

  9. Single-Ion Deconvolution of Mass Peak Overlaps for Atom Probe Microscopy.

    PubMed

    London, Andrew J; Haley, Daniel; Moody, Michael P

    2017-04-01

    Due to the intrinsic evaporation properties of the material studied, insufficient mass-resolving power and lack of knowledge of the kinetic energy of incident ions, peaks in the atom probe mass-to-charge spectrum can overlap and result in incorrect composition measurements. Contributions to these peak overlaps can be deconvoluted globally, by simply examining adjacent peaks combined with knowledge of natural isotopic abundances. However, this strategy does not account for the fact that the relative contributions to this convoluted signal can often vary significantly in different regions of the analysis volume; e.g., across interfaces and within clusters. Some progress has been made with spatially localized deconvolution in cases where the discrete microstructural regions can be easily identified within the reconstruction, but this means no further point cloud analyses are possible. Hence, we present an ion-by-ion methodology where the identity of each ion, normally obscured by peak overlap, is resolved by examining the isotopic abundance of their immediate surroundings. The resulting peak-deconvoluted data are a point cloud and can be analyzed with any existing tools. We present two detailed case studies and discussion of the limitations of this new technique.
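
The global (non-spatial) variant of this deconvolution amounts to a least-squares apportionment using natural isotopic abundances. The sketch below resolves the classic 58Fe/58Ni overlap; the ion counts are invented, and several minor Ni isotopes are omitted for brevity.

```python
import numpy as np

# Natural isotopic abundances (fractions) over the mass range 54..61;
# minor Ni isotopes (62, 64) omitted for brevity.
masses = np.arange(54, 62)
fe = {54: 0.05845, 56: 0.91754, 57: 0.02119, 58: 0.00282}
ni = {58: 0.68077, 60: 0.26223, 61: 0.01140}   # 58Fe and 58Ni overlap

A = np.zeros((len(masses), 2))
for i, m in enumerate(masses):
    A[i, 0] = fe.get(m, 0.0)
    A[i, 1] = ni.get(m, 0.0)

# Synthetic spectrum from 7000 Fe and 3000 Ni ions (invented numbers).
counts = A @ np.array([7000.0, 3000.0])

# Global deconvolution: least-squares apportionment of the overlapped
# mass-58 peak using the non-overlapped peaks and known abundances.
n_ions, *_ = np.linalg.lstsq(A, counts, rcond=None)
fe58 = n_ions[0] * fe[58]
ni58 = n_ions[1] * ni[58]
```

The paper's contribution is precisely that this global apportionment is replaced by a local, ion-by-ion one, so that composition gradients across interfaces and clusters are not averaged away.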

  10. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm.

    PubMed

    Liu, Haiguang; Spence, John C H

    2014-11-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.
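
The chicken-and-egg structure of the algorithm can be illustrated with a deliberately tiny, noise-free toy: partial "stills" observed in one of two equivalent indexing settings, a model grown from the patterns, and each pattern assigned to whichever setting agrees best with the model so far. The reflection values, observation masks, and the index reversal standing in for the merohedral twin operation are all invented; the real algorithm uses soft EM weights over full intensity statistics.

```python
import numpy as np

# Toy model: 8 "reflections", and a twin operation that permutes the
# indexing (here simply a reversal of the index order).
I_true = np.array([5.0, 1.0, 9.0, 2.0, 7.0, 3.0, 8.0, 4.0])
perm = np.arange(8)[::-1]

# Each still observes only a subset of reflections (partial data), indexed
# in one of the two equivalent settings (ground-truth labels are unknown
# to the algorithm).
masks = [np.array([0, 1, 2, 3, 4]), np.array([0, 1, 2, 3, 5]),
         np.array([0, 1, 2, 3, 6]), np.array([0, 1, 2, 3, 7]),
         np.array([0, 1, 2, 3, 4]), np.array([0, 1, 2, 3, 5])]
truth_labels = [0, 1, 0, 1, 0, 1]
patterns = []
for mask, g in zip(masks, truth_labels):
    idx = mask if g == 0 else perm[mask]
    patterns.append(I_true[idx])

# One EM-style sweep: grow a merged model, assigning each pattern to the
# indexing setting that best matches the model built so far.
model = np.full(8, np.nan)
model[masks[0]] = patterns[0]          # adopt pattern 0 as the reference
labels = []
for mask, p in zip(masks, patterns):
    def misfit(positions):
        known = ~np.isnan(model[positions])
        if not known.any():
            return np.inf
        return np.mean((p[known] - model[positions][known]) ** 2)
    d_same, d_flip = misfit(mask), misfit(perm[mask])
    lab = 0 if d_same <= d_flip else 1
    labels.append(lab)
    model[mask if lab == 0 else perm[mask]] = p   # merge into the model
```

Even in this stripped-down form, the interplay is visible: the growing model resolves each pattern's ambiguity, and each resolved pattern in turn completes the model.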

  11. Ambiguous taxa: Effects on the characterization and interpretation of invertebrate assemblages

    USGS Publications Warehouse

    Cuffney, T.F.; Bilger, Michael D.; Haigler, A.M.

    2007-01-01

    Damaged and immature specimens often result in macroinvertebrate data that contain ambiguous parent-child pairs (i.e., abundances associated with multiple related levels of the taxonomic hierarchy such as Baetis pluto and the associated ambiguous parent Baetis sp.). The choice of method used to resolve ambiguous parent-child pairs may have a very large effect on the characterization of invertebrate assemblages and the interpretation of responses to environmental change because very large proportions of taxa richness (73-78%) and abundance (79-91%) can be associated with ambiguous parents. To address this issue, we examined 16 variations of 4 basic methods for resolving ambiguous taxa: RPKC (remove parent, keep child), MCWP (merge child with parent), RPMC (remove parent or merge child with parent depending on their abundances), and DPAC (distribute parents among children). The choice of method strongly affected assemblage structure, assemblage characteristics (e.g., metrics), and the ability to detect responses along environmental (urbanization) gradients. All methods except MCWP produced acceptable results when used consistently within a study. However, the assemblage characteristics (e.g., values of assemblage metrics) differed widely depending on the method used, and data should not be combined unless the methods used to resolve ambiguous taxa are well documented and are known to be comparable. The suitability of the methods was evaluated and compared on the basis of 13 criteria that considered conservation of taxa richness and abundance, consistency among samples, methods, and studies, and effects on the interpretation of the data. Methods RPMC and DPAC had the highest suitability scores regardless of whether ambiguous taxa were resolved for each sample separately or for a group of samples. Method MCWP gave consistently poor results. Methods MCWP and DPAC approximate the use of family-level identifications and operational taxonomic units (OTU), respectively. 
Our results suggest that restricting identifications to the family level is not a good method of resolving ambiguous taxa, whereas generating OTUs works well provided that documentation issues are addressed. © 2007 by The North American Benthological Society.
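
The DPAC method, for example, distributes an ambiguous parent's abundance among its identified children in proportion to the children's own abundances. A minimal sketch (the counts and the second species name are invented):

```python
# DPAC ("distribute parents among children"): apportion the abundance of
# an ambiguous parent taxon among its identified children in proportion
# to the children's own abundances.
def distribute_parent(parent_count, child_counts):
    total = sum(child_counts.values())
    return {name: n + parent_count * n / total
            for name, n in child_counts.items()}

counts = distribute_parent(
    10,                                    # "Baetis sp." (ambiguous parent)
    {"Baetis pluto": 30, "Baetis flavistriga": 10})
```

Here the 10 ambiguous Baetis sp. individuals are split 7.5 / 2.5 according to the 30:10 ratio of the identified children, conserving total abundance while avoiding the richness loss of merging children into the parent.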

  12. Global positioning system network analysis with phase ambiguity resolution applied to crustal deformation studies in California

    NASA Technical Reports Server (NTRS)

    Dong, Da-Nan; Bock, Yehuda

    1989-01-01

    An efficient algorithm is developed for multisession adjustment of GPS data with simultaneous orbit determination and ambiguity resolution. Application of the algorithm to data from a five-year campaign in progress in southern and central California, which monitors tectonic motions using observations of GPS satellites, demonstrates improvements in estimates of station positions and satellite orbits when the phase ambiguities are resolved. Most of the phase ambiguities in the GPS network were resolved, particularly for all the baselines of geophysical interest in California.

  13. LIFG-based attentional control and the resolution of lexical ambiguities in sentence context

    PubMed Central

    Vuong, Loan C.; Martin, Randi C.

    2010-01-01

    The role of attentional control in lexical ambiguity resolution was examined in two patients with damage to the left inferior frontal gyrus (LIFG) and one control patient with non-LIFG damage. Experiment 1 confirmed that the LIFG patients had attentional control deficits compared to normal controls while the non-LIFG patient was relatively unimpaired. Experiment 2 showed that all three patients did as well as normal controls in using biasing sentence context to resolve lexical ambiguities involving balanced ambiguous words, but only the LIFG patients took an abnormally long time on lexical ambiguities that resolved toward a subordinate meaning of biased ambiguous words. Taken together, the results suggest that attentional control plays an important role in the resolution of certain lexical ambiguities – those that induce strong interference from context-inappropriate meanings (i.e., dominant meanings of biased ambiguous words). PMID:20971500

  14. Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition

    PubMed Central

    Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac

    2013-01-01

    Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and the ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and the ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and a substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772

  15. Determination of Patterson group symmetry from sparse multi-crystal data sets in the presence of an indexing ambiguity.

    PubMed

    Gildea, Richard J; Winter, Graeme

    2018-05-01

    Combining X-ray diffraction data from multiple samples requires determination of the symmetry and resolution of any indexing ambiguity. For the partial data sets typical of in situ room-temperature experiments, determination of the correct symmetry is often not straightforward. The potential for indexing ambiguity in polar space groups is also an issue, although methods to resolve this are available if the true symmetry is known. Here, a method is presented to determine the Patterson symmetry and resolve the indexing ambiguity simultaneously for partial data sets.

  16. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at the Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembled the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
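
The forward problem and a regularized inverse can be sketched in the frequency domain. Here a Gaussian stands in for the measured SRM sensor response, plain Tikhonov damping stands in for the authors' ABIC-optimized deconvolution, and the magnetization profile with its "excursion" is invented.

```python
import numpy as np

n = 256
z = np.arange(n, dtype=float)

# "True" magnetization: a smooth background with a short excursion.
truth = 1.0 - 2.0 * np.exp(-0.5 * ((z - 105) / 4.0) ** 2)

# Gaussian stand-in for the SRM sensor-response function (circular grid).
sigma = 4.0
d = np.minimum(z, n - z)
g = np.exp(-0.5 * (d / sigma) ** 2)
g /= g.sum()
H = np.fft.rfft(g)

# Pass-through measurement: magnetization smoothed by the sensor response.
measured = np.fft.irfft(H * np.fft.rfft(truth), n)

# Tikhonov-regularized deconvolution in the frequency domain.
lam = 1e-4
G = np.conj(H) / (np.abs(H) ** 2 + lam)
restored = np.fft.irfft(G * np.fft.rfft(measured), n)
```

The damping parameter `lam` plays the role that ABIC minimization automates in the paper: large enough to suppress noise-dominated frequencies, small enough to recover the excursion's true amplitude.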

  17. Supermassive Black Holes with High Accretion Rates in Active Galactic Nuclei. VI. Velocity-resolved Reverberation Mapping of the Hβ Line

    NASA Astrophysics Data System (ADS)

    Du, Pu; Lu, Kai-Xing; Hu, Chen; Qiu, Jie; Li, Yan-Rong; Huang, Ying-Ke; Wang, Fang; Bai, Jin-Ming; Bian, Wei-Hao; Yuan, Ye-Fei; Ho, Luis C.; Wang, Jian-Min; SEAMBH Collaboration

    2016-03-01

    In the sixth of a series of papers reporting on a large reverberation mapping (RM) campaign of active galactic nuclei (AGNs) with high accretion rates, we present velocity-resolved time lags of Hβ emission lines for nine objects observed in the campaign during 2012-2013. In order to correct for the line broadening caused by seeing and instruments before analyzing the velocity-resolved RM, we adopt Richardson-Lucy deconvolution to reconstruct their Hβ profiles. The validity and effectiveness of the deconvolution are checked using Monte Carlo simulation. Five among the nine objects show clear dependence of the time delay on velocity. Mrk 335 and Mrk 486 show signatures of gas inflow, whereas the clouds in the broad-line regions (BLRs) of Mrk 142 and MCG +06-26-012 tend to be radially outflowing. Mrk 1044 is consistent with having virialized motions. The lags of the remaining four are not velocity-resolvable. The velocity-resolved RM of super-Eddington accreting massive black holes (SEAMBHs) shows that they have diverse kinematics in their BLRs. Comparing with the AGNs with sub-Eddington accretion rates, we do not find significant differences in the BLR kinematics of SEAMBHs.

  18. Manipulations of Wavefront Propagation: Useful Methods and Applications for Interferometric Measurements and Scanning

    PubMed Central

    Novoselski, Eitan; Yifrach, Ariel; Lanzmann, Emmanuel; Arieli, Yoel

    2017-01-01

    Phase measurements obtained by high-coherence interferometry are restricted by the 2π ambiguity to height differences smaller than λ/2. A further restriction in most interferometric systems is the need to focus the system on the measured object. We present two methods that overcome these restrictions. In the first method, different segments of a measured wavefront are digitally propagated and focused locally after measurement. The distances by which the various segments of the wavefront must be propagated to achieve a focused image provide enough information to resolve the 2π ambiguity. The second method employs an interferogram obtained with a spectrum comprising a small number of wavelengths. The magnitude of the interferogram's modulations is utilized to resolve the 2π ambiguity. Such methods of wavefront propagation enable several applications, such as focusing and resolving the 2π ambiguity, as described in the article. PMID:29109825
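
The idea behind the second method, using more than one wavelength to extend the unambiguous range, can be illustrated with the standard two-wavelength synthetic-wavelength trick (a simplification of the multi-wavelength modulation analysis described in the abstract; the wavelengths and height are invented):

```python
import numpy as np

def wrap(phase):
    """Wrap a phase to (-pi, pi]."""
    return np.angle(np.exp(1j * phase))

lam1, lam2 = 0.60, 0.66          # wavelengths in micrometres (invented)
h_true = 1.23                    # step height, far beyond lam/2

# Single-wavelength phase is ambiguous: only wrap(4*pi*h/lam) is measured
# (factor 4*pi because the height is traversed twice on reflection).
phi1 = wrap(4 * np.pi * h_true / lam1)
phi2 = wrap(4 * np.pi * h_true / lam2)

# The phase difference behaves like a measurement at a much longer
# "synthetic" wavelength, valid here for |h| < lam_syn / 4.
lam_syn = lam1 * lam2 / (lam2 - lam1)        # 6.6 um
h_est = wrap(phi1 - phi2) * lam_syn / (4 * np.pi)
```

A height of 1.23 µm is hopelessly ambiguous at either wavelength alone (λ/2 ≈ 0.3 µm), but is recovered exactly from the 6.6 µm synthetic wavelength.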

  19. Detection of high-risk atherosclerotic lesions by time-resolved fluorescence spectroscopy based on the Laguerre deconvolution technique

    NASA Astrophysics Data System (ADS)

    Jo, J. A.; Fang, Q.; Papaioannou, T.; Qiao, J. H.; Fishbein, M. C.; Beseth, B.; Dorafshar, A. H.; Reil, T.; Baker, D.; Freischlag, J.; Marcu, L.

    2006-02-01

    This study introduces new methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data analysis for tissue characterization. These analytical methods were applied for the detection of atherosclerotic vulnerable plaques. Upon pulsed nitrogen laser (337 nm, 1 ns) excitation, TR-LIFS measurements were obtained from carotid atherosclerotic plaque specimens (57 endarterectomy patients) at 492 distinct areas. The emission was both spectrally- (360-600 nm range at 5 nm interval) and temporally- (0.3 ns resolution) resolved using a prototype clinically compatible fiber-optic catheter TR-LIFS apparatus. The TR-LIFS measurements were subsequently analyzed using a standard multiexponential deconvolution and a recently introduced Laguerre deconvolution technique. Based on their histopathology, the lesions were classified as early (thin intima), fibrotic (collagen-rich intima), and high-risk (thin cap over necrotic core and/or inflamed intima). Stepwise linear discriminant analysis (SLDA) was applied for lesion classification. Normalized spectral intensity values and Laguerre expansion coefficients (LEC) at discrete emission wavelengths (390, 450, 500 and 550 nm) were used as features for classification. The Laguerre-based SLDA classifier provided discrimination of high-risk lesions with high sensitivity (SE>81%) and specificity (SP>95%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for the diagnosis of high-risk vulnerable atherosclerotic plaques.
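
The core of the Laguerre technique is that deconvolution reduces to linear least squares for basis-expansion coefficients. The sketch below keeps that structure but substitutes a small exponential basis for the discrete Laguerre basis, with an invented Gaussian instrument response and invented lifetimes:

```python
import numpy as np

t = np.arange(0.0, 20.0, 0.1)                  # time axis, ns
irf = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2)    # instrument response (invented)
irf /= irf.sum()

# Measured decay = IRF convolved with a two-component fluorescence decay.
taus = np.array([0.5, 1.0, 2.0, 4.0])          # candidate lifetimes (ns)
true_coef = np.array([0.0, 2.0, 0.0, 1.0])     # only two components present
basis = np.exp(-t[:, None] / taus[None, :])
conv_basis = np.empty_like(basis)
for j in range(len(taus)):
    conv_basis[:, j] = np.convolve(irf, basis[:, j])[: len(t)]
measured = conv_basis @ true_coef

# Deconvolution reduces to linear least squares for the expansion
# coefficients, exactly as in the Laguerre-coefficient approach.
coef, *_ = np.linalg.lstsq(conv_basis, measured, rcond=None)
```

Because the basis functions are convolved with the instrument response before fitting, no explicit division or iterative deconvolution is needed, and the fitted coefficients can feed a classifier directly, as the Laguerre expansion coefficients do in the study.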

  20. Blind source deconvolution for deep Earth seismology

    NASA Astrophysics Data System (ADS)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses and permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component, with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications and waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
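
Water-level deconvolution, one of the methods compared above, is essentially a one-line stabilization of spectral division: clamp the denominator at a fraction of its peak so that weak frequencies do not blow up. The source wavelet and trace below are invented stand-ins for the empirical source and a recorded seismogram.

```python
import numpy as np

n = 512
t = np.arange(n, dtype=float)

# Empirical source wavelet (Gaussian stand-in, circular grid) and a trace
# consisting of that source arriving at sample 200.
sigma = 5.0
d = np.minimum(t, n - t)
src = np.exp(-0.5 * (d / sigma) ** 2)
src /= src.max()
S = np.fft.rfft(src)
trace = np.fft.irfft(S * np.fft.rfft((t == 200).astype(float)), n)

# Water-level deconvolution: spectral division with the denominator
# clamped at a fraction w of its peak to suppress noise blow-up.
w = 0.01
denom = np.maximum(np.abs(S) ** 2, w * np.max(np.abs(S) ** 2))
deconvolved = np.fft.irfft(np.fft.rfft(trace) * np.conj(S) / denom, n)
```

The result concentrates the arrival back into a narrow pulse at the correct sample, which is the deblurring behavior the paper benchmarks TV deconvolution against.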

  1. Intrinsic fluorescence spectroscopy of glutamate dehydrogenase: Integrated behavior and deconvolution analysis

    NASA Astrophysics Data System (ADS)

    Pompa, P. P.; Cingolani, R.; Rinaldi, R.

    2003-07-01

    In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely, of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands Gj. The relative changes of the Gj parameters are directly related to the conformational changes of the enzyme, and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method results in an excellent fitting of all the spectra obtained with GDH in a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamers association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermal-induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.

  2. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  3. Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.

    2011-04-01

We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.'5 inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and ¹²CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
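The flux-conserving L-R iteration the abstract describes can be illustrated in a few lines. This is a minimal 1-D numpy sketch (not the authors' BLAST pipeline): multiplicative updates keep the estimate non-negative and, for a normalized PSF, approximately conserve total flux while sharpening blended sources.

```python
import numpy as np

def lucy_richardson(observed, psf, iterations=50):
    """Iterative Lucy-Richardson deconvolution (1-D sketch).

    Multiplicative updates keep the estimate non-negative and, for a
    normalized PSF, approximately conserve total flux each iteration.
    """
    psf = psf / psf.sum()                    # normalized kernel
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two nearby point sources blurred by a Gaussian beam.
x = np.arange(100)
truth = np.zeros(100); truth[40] = 1.0; truth[55] = 0.6
beam = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
observed = np.convolve(truth, beam / beam.sum(), mode="same")
restored = lucy_richardson(observed, beam, iterations=200)
```

On noiseless data the two blended sources re-emerge as sharp peaks with their relative flux densities preserved, which is the property the record emphasizes for crowded regions.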

  4. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution.

    PubMed

    Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj

    2016-10-05

Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.
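The CCS values quoted above come, in drift-tube work, from the Mason-Schamp relation between reduced mobility and cross section. The sketch below shows that standard conversion; the input values (K0 = 1.4 cm² V⁻¹ s⁻¹, a 700 Da ion in N2, z = 2) are illustrative assumptions chosen to land in the same ~290-300 Å² range as the peptides in the record, not the paper's measured mobilities.

```python
import math

# Physical constants (SI)
KB  = 1.380649e-23      # Boltzmann constant, J/K
E   = 1.602176634e-19   # elementary charge, C
N0  = 2.6867811e25      # Loschmidt number density, m^-3 (273.15 K, 101.325 kPa)
AMU = 1.66053907e-27    # atomic mass unit, kg

def ccs_mason_schamp(k0_cm2, m_ion_da, m_gas_da, z=1, temp_k=298.0):
    """Collision cross section (Å^2) from reduced mobility K0
    (cm^2 V^-1 s^-1) via the Mason-Schamp equation."""
    k0 = k0_cm2 * 1e-4                                        # -> m^2 V^-1 s^-1
    mu = m_ion_da * m_gas_da / (m_ion_da + m_gas_da) * AMU    # reduced mass, kg
    omega = (3.0 * z * E) / (16.0 * N0) \
            * math.sqrt(2.0 * math.pi / (mu * KB * temp_k)) / k0
    return omega * 1e20                                       # m^2 -> Å^2

# Hypothetical doubly protonated peptide drifting in N2 (illustrative inputs):
ccs = ccs_mason_schamp(k0_cm2=1.4, m_ion_da=700.0, m_gas_da=28.0, z=2)  # ~295 Å^2
```

The AT "shift factors" the record introduces correct the arrival times that feed into K0 before this conversion is applied.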

  5. Reduction of Phase Ambiguity in an Offset-QPSK Receiver

    NASA Technical Reports Server (NTRS)

    Berner, Jeff; Kinman, Peter

    2004-01-01

    Proposed modifications of an offset-quadri-phase-shift keying (offset-QPSK) transmitter and receiver would reduce the amount of signal processing that must be done in the receiver to resolve the QPSK fourfold phase ambiguity. Resolution of the phase ambiguity is necessary in order to synchronize, with the received carrier signal, the signal generated by a local oscillator in a carrier-tracking loop in the receiver. Without resolution of the fourfold phase ambiguity, the loop could lock to any of four possible phase points, only one of which has the proper phase relationship with the carrier. The proposal applies, more specifically, to an offset-QPSK receiver that contains a carrier-tracking loop like that shown in Figure 1. This carrier-tracking loop does not resolve or reduce the phase ambiguity. A carrier-tracking loop of a different design optimized for the reception of offset QPSK could reduce the phase ambiguity from fourfold to twofold, but would be more complex. Alternatively, one could resolve the fourfold phase ambiguity by use of differential coding in the transmitter, at a cost of reduced power efficiency. The proposed modifications would make it possible to reduce the fourfold phase ambiguity to twofold, with no loss in power efficiency and only relatively simple additional signal-processing steps in the transmitter and receiver. The twofold phase ambiguity would then be resolved by use of a unique synchronization word, as is commonly done in binary phase-shift keying (BPSK). Although the mathematical and signal-processing principles underlying the modifications are too complex to explain in detail here, the modifications themselves would be relatively simple and are best described with the help of simple block diagrams (see Figure 2). 
In the transmitter, one would add a unit that would periodically invert bits going into the QPSK modulator; in the receiver, one would add a unit that would effect different but corresponding inversions of bits coming out of the QPSK demodulator. The net effect of all the inversions would be that depending on which lock point the carrier-tracking loop had selected, all the output bits would be either inverted or non-inverted together; hence, the ambiguity would be reduced from fourfold to twofold, as desired.
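The final step above, resolving the remaining twofold (all-inverted vs. non-inverted) ambiguity with a unique synchronization word, can be sketched directly. This toy (the sync word and payload are made up, and real receivers work on soft symbols, not clean bit arrays) correlates the received stream against the known word and its complement and flips every bit if the complement matches better:

```python
import numpy as np

SYNC = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # example sync word

def resolve_polarity(bits):
    """Resolve a twofold (inverted vs. non-inverted) phase ambiguity.

    Correlate the stream against the sync word and its complement;
    if the complement matches better, invert every bit.
    """
    bits = np.asarray(bits, dtype=np.uint8)
    offsets = range(len(bits) - len(SYNC) + 1)
    direct = max(np.sum(bits[i:i+len(SYNC)] == SYNC) for i in offsets)
    inverted = max(np.sum(bits[i:i+len(SYNC)] == (1 - SYNC)) for i in offsets)
    return bits if direct >= inverted else 1 - bits

payload = np.array([1, 1, 0, 1, 0], dtype=np.uint8)
frame = np.concatenate([SYNC, payload])
received = 1 - frame                 # carrier loop locked on the inverted point
recovered = resolve_polarity(received)
```

Because the lock-point error inverts all bits together (the situation the proposed modifications guarantee), a single polarity decision per frame recovers the data.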

  6. Doppler centroid estimation ambiguity for synthetic aperture radars

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1989-01-01

    A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.
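The multiple-PRF idea can be illustrated with a small consistency search. This is a toy, not the SIR-C processor: each PRF yields the centroid only modulo that PRF, so the absolute centroid is the candidate (within the pointing-derived bound) consistent with every aliased measurement. The PRF values and tolerance below are illustrative.

```python
def resolve_doppler(f_meas, prfs, f_max, tol=1.0):
    """Resolve Doppler-centroid ambiguity from aliased estimates.

    f_meas[i] is the centroid measured modulo prfs[i].  Search absolute
    centroids |f| <= f_max consistent with every PRF within tol (Hz).
    """
    k_max = int(f_max // prfs[0]) + 1
    for k in range(-k_max, k_max + 1):
        f = f_meas[0] + k * prfs[0]
        if abs(f) > f_max:
            continue
        if all(min((f - fm) % prf, prf - (f - fm) % prf) <= tol
               for fm, prf in zip(f_meas[1:], prfs[1:])):
            return f
    return None

# True centroid 3730 Hz observed through two PRFs without a common divisor
# large enough to alias it identically:
fdc = resolve_doppler([3730 % 1500, 3730 % 1700], prfs=[1500, 1700], f_max=10000.0)
```

The unambiguous range grows with the least common multiple of the PRFs, which is why PRFs lacking a small common divisor are attractive even when chosen for other system constraints.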

  7. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    PubMed

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
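The basic steps of such a Fourier deconvolution (transform, divide by the broadening lineshape, apodize to control noise amplification, transform back) can be sketched with numpy. This is a generic illustration, not the authors' implementation; the cos² apodization window and cutoff are assumptions standing in for whatever filter a practitioner would tune.

```python
import numpy as np

def fourier_deconvolve(spectrum, lineshape, cutoff):
    """Resolution enhancement by Fourier deconvolution (1-D sketch).

    Divide the spectrum's FT by that of the broadening lineshape; a cos^2
    apodization window, zero beyond `cutoff` (cycles/sample), suppresses
    the noise the ill-conditioned division amplifies at high frequencies.
    """
    n = len(spectrum)
    S = np.fft.rfft(spectrum)
    H = np.fft.rfft(np.fft.ifftshift(lineshape))   # lineshape centered at n//2
    f = np.fft.rfftfreq(n)                         # 0 .. 0.5 cycles/sample
    window = np.where(f < cutoff, np.cos(np.pi * f / (2 * cutoff)) ** 2, 0.0)
    return np.fft.irfft(window * S / (H + 1e-30), n=n)

def gaussian(n, center, sigma):
    x = np.arange(n)
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

n = 512
truth = gaussian(n, 246, 4.0) + gaussian(n, 266, 4.0)   # two narrow lines
broad = gaussian(n, n // 2, 12.0)
broad /= broad.sum()
observed = np.convolve(truth, broad, mode="same")       # merged into one band
restored = fourier_deconvolve(observed, broad, cutoff=0.08)
```

The two overlapped transitions, unresolvable in the broadened spectrum, reappear as distinct peaks, with the cutoff trading resolution gain against ringing, the limitation the record notes.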

  8. Broadband ion mobility deconvolution for rapid analysis of complex mixtures.

    PubMed

    Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj

    2018-05-04

    High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.

  9. Perirhinal Cortex Resolves Feature Ambiguity in Configural Object Recognition and Perceptual Oddity Tasks

    ERIC Educational Resources Information Center

    Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.

    2007-01-01

    The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…

  10. Children's Use of Gesture to Resolve Lexical Ambiguity

    ERIC Educational Resources Information Center

    Kidd, Evan; Holler, Judith

    2009-01-01

    We report on a study investigating 3-5-year-old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that contained two homonym senses; for example, "bat" (flying mammal) and "bat" (sports equipment). They were then asked to re-tell these stories to a second experimenter. The data were coded for the means…

  11. Saturation-resolved-fluorescence spectroscopy of Cr3+:mullite glass ceramic

    NASA Astrophysics Data System (ADS)

    Liu, Huimin; Knutson, Robert; Yen, W. M.

    1990-01-01

    We present a saturation-based technique designed to isolate and uncouple individual components of inhomogeneously broadened spectra that are simultaneously coupled to each other through spectral overlap and energy-transfer interactions. We have termed the technique saturation-resolved-fluorescence spectroscopy; we demonstrate its usefulness in deconvoluting the complex spectra of Cr3+:mullite glass ceramic.

  12. On resolving the 180 deg ambiguity for a temporal sequence of vector magnetograms

    NASA Astrophysics Data System (ADS)

    Cheung, M. C.

    2008-05-01

The solar coronal magnetic field evolves in response to the underlying photospheric driving. To study this connection by means of data-driven modeling, an accurate knowledge of the evolution of the photospheric vector field is essential. While there is a large body of work on attempts to resolve the 180 deg ambiguity in the component of the magnetic field transverse to the line of sight, most of these methods are applicable only to individual frames. With the imminent launch of the Solar Dynamics Observatory, it is especially timely for us to develop possible automated methods to resolve the ambiguity for temporal sequences of magnetograms. We present here the temporal acute angle method, which makes use of preceding disambiguated magnetograms as reference solutions for resolving the ambiguity in subsequent frames. To find the strengths and weaknesses of this method, we have carried out tests (1) on idealized magnetogram sequences involving simple rotating, shearing and straining flows and (2) on a synthetic magnetogram sequence from a 3D radiative MHD simulation of a buoyant magnetic flux tube emerging through granular convection. A metric for automatically picking out regions where the method is likely to fail is also presented.
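The temporal acute angle test itself is a one-line decision per pixel: keep the measured transverse vector if it makes an acute angle with the previously disambiguated frame, otherwise flip it by 180 deg. A minimal sketch (the rotating-field toy below is our own construction, not the paper's test cases):

```python
import numpy as np

def temporal_acute_angle(bt_x, bt_y, ref_x, ref_y):
    """Disambiguate transverse-field azimuths against a reference frame.

    Keep each pixel's transverse vector if it makes an acute angle with
    the previously disambiguated reference vector; otherwise flip 180 deg.
    """
    dot = bt_x * ref_x + bt_y * ref_y
    sign = np.where(dot >= 0, 1.0, -1.0)
    return sign * bt_x, sign * bt_y

# Toy sequence: the true azimuth drifts slightly between frames, but the
# new measurement randomly flips some pixels by 180 deg.
rng = np.random.default_rng(0)
theta_ref = rng.uniform(0, 2 * np.pi, 100)            # frame t-1, disambiguated
theta_true = theta_ref + rng.normal(0, 0.1, 100)      # frame t, small evolution
flips = rng.random(100) < 0.5
theta_meas = theta_true + np.where(flips, np.pi, 0.0)
bx, by = temporal_acute_angle(np.cos(theta_meas), np.sin(theta_meas),
                              np.cos(theta_ref), np.sin(theta_ref))
```

The method fails exactly where the field rotates by more than 90 deg between frames, which is why the record proposes a metric for flagging such regions.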

  13. Resolving Number Ambiguities during Language Comprehension

    ERIC Educational Resources Information Center

    Bader, Markus; Haussler, Jana

    2009-01-01

    This paper investigates how readers process number ambiguous noun phrases in subject position. A speeded-grammaticality judgment experiment and two self-paced reading experiments were conducted involving number ambiguous subjects in German verb-end clauses. Number preferences for individual nouns were estimated by means of two questionnaire…

  14. Application of the multiple PRF technique to resolve Doppler centroid estimation ambiguity for spaceborne SAR

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1992-01-01

Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded in both the system impulse response function and the geometric fidelity. Two techniques for resolution of DCE ambiguity for the spaceborne SAR are presented; they include a brief review of the range cross-correlation technique and presentation of a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems, where other performance factors control selection of the PRF's, an algorithm is devised to resolve the ambiguity that uses PRF's of arbitrary numerical values. The performance of this multiple PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.

  15. Method of resolving radio phase ambiguity in satellite orbit determination

    NASA Technical Reports Server (NTRS)

Counselman, Charles C., III; Abbot, Richard I.

    1989-01-01

    For satellite orbit determination, the most accurate observable available today is microwave radio phase, which can be differenced between observing stations and between satellites to cancel both transmitter- and receiver-related errors. For maximum accuracy, the integer cycle ambiguities of the doubly differenced observations must be resolved. To perform this ambiguity resolution, a bootstrapping strategy is proposed. This strategy requires the tracking stations to have a wide ranging progression of spacings. By conventional 'integrated Doppler' processing of the observations from the most widely spaced stations, the orbits are determined well enough to permit resolution of the ambiguities for the most closely spaced stations. The resolution of these ambiguities reduces the uncertainty of the orbit determination enough to enable ambiguity resolution for more widely spaced stations, which further reduces the orbital uncertainty. In a test of this strategy with six tracking stations, both the formal and the true errors of determining Global Positioning System satellite orbits were reduced by a factor of 2.

  16. SUPERMASSIVE BLACK HOLES WITH HIGH ACCRETION RATES IN ACTIVE GALACTIC NUCLEI. VI. VELOCITY-RESOLVED REVERBERATION MAPPING OF THE Hβ LINE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Pu; Lu, Kai-Xing; Hu, Chen

In the sixth of a series of papers reporting on a large reverberation mapping (RM) campaign of active galactic nuclei (AGNs) with high accretion rates, we present velocity-resolved time lags of Hβ emission lines for nine objects observed in the campaign during 2012–2013. In order to correct the line broadening caused by seeing and instruments before analyzing the velocity-resolved RM, we adopt the Richardson–Lucy deconvolution to reconstruct their Hβ profiles. The validity and effectiveness of the deconvolution are checked using Monte Carlo simulation. Five among the nine objects show clear dependence of the time delay on velocity. Mrk 335 and Mrk 486 show signatures of gas inflow, whereas the clouds in the broad-line regions (BLRs) of Mrk 142 and MCG +06-26-012 tend to be radially outflowing. Mrk 1044 is consistent with having virialized motions. The lags of the remaining four are not velocity-resolvable. The velocity-resolved RM of super-Eddington accreting massive black holes (SEAMBHs) shows that they have diverse kinematics in their BLRs. Comparing with the AGNs with sub-Eddington accretion rates, we do not find significant differences in the BLR kinematics of SEAMBHs.

  17. Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.

    PubMed

    Chen, Xin; Liu, Zhen; Wei, Xizhang

    2017-05-11

    Under fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture would suffer from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of UCA with an even number of sensors, the source's angles and range can be decoupled and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is addressed to resolve angle ambiguity. In the SGAS algorithm, each subarray formed by two couples of centro-symmetry sensors can obtain a batch of results under different ambiguities, and by searching the nearest value among subarrays, which is always corresponding to correct ambiguity, rough angle estimation with no ambiguity is realized. Then, the unambiguous angles are employed to resolve phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be achieved. Moreover, to improve the practical performance of SGAS, the optimal structure of subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.

  18. Resolving the ambiguities: An industrial hygiene Indoor Air Quality (IAQ) symposium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammage, R.B.

    1995-01-01

    Resolving the Ambiguities: An Industrial Hygiene (IAQ) Symposium was a one-day event designed to inform practicing industrial hygienists about highlight presentations made at Indoor Air `93. A broad range of topics was presented by invited speakers. Topics included were attempts to deal with guidelines and standards, questionnaires, odors and sensory irritation, respiratory allergies, neuroses, sick building syndrome (SBS), and multiple chemical sensitivity (MCS).

  19. Phase ambiguity resolution for offset QPSK modulation systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien M. (Inventor)

    1991-01-01

    A demodulator for Offset Quaternary Phase Shift Keyed (OQPSK) signals modulated with two words resolves eight possible combinations of phase ambiguity which may produce data error by first processing received I(sub R) and Q(sub R) data in an integrated carrier loop/symbol synchronizer using a digital Costas loop with matched filters for correcting four of eight possible phase lock errors, and then the remaining four using a phase ambiguity resolver which detects the words to not only reverse the received I(sub R) and Q(sub R) data channels, but to also invert (complement) the I(sub R) and/or Q(sub R) data, or to at least complement the I(sub R) and Q(sub R) data for systems using nontransparent codes that do not have rotation direction ambiguity.

  20. Evaluation of deconvolution modelling applied to numerical combustion

    NASA Astrophysics Data System (ADS)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests conducted analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
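The first of the three methods, Van Cittert iteration, is simple enough to sketch. This is a generic 1-D illustration (a tanh "flame front" of our own choosing, not the paper's configuration): the estimate is repeatedly corrected by the residual between the filtered field and the re-filtered estimate, and truncating the iteration supplies the regularization the abstract calls for.

```python
import numpy as np

def van_cittert(filtered, kernel, iterations=20, beta=1.0):
    """Approximate deconvolution by Van Cittert iteration.

    phi_{k+1} = phi_k + beta * (filtered - G*phi_k), where G* denotes
    convolution with the filter kernel.  Truncating the iteration acts
    as regularization of the ill-posed filter inversion.
    """
    kernel = kernel / kernel.sum()
    estimate = filtered.copy()
    for _ in range(iterations):
        estimate = estimate + beta * (
            filtered - np.convolve(estimate, kernel, mode="same"))
    return estimate

# Toy 1-D "flame front": a smooth step in a progress-variable-like scalar,
# blurred by a Gaussian filter of width comparable to the front thickness.
x = np.linspace(-1, 1, 200)
scalar = 0.5 * (1 + np.tanh(x / 0.05))
g = np.exp(-0.5 * (np.linspace(-5, 5, 41) / 1.5) ** 2)
filtered = np.convolve(scalar, g / g.sum(), mode="same")
deconvolved = van_cittert(filtered, g, iterations=30)
```

The iteration steepens the smeared front back toward the unfiltered profile, but only for wavenumbers the filter has not driven below the noise floor, which is the filter-size limitation the a priori and a posteriori tests expose.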

  1. Laguerre-based method for analysis of time-resolved fluorescence data: application to in-vivo characterization and diagnosis of atherosclerotic lesions.

    PubMed

    Jo, Javier A; Fang, Qiyin; Papaioannou, Thanassis; Baker, J Dennis; Dorafshar, Amir H; Reil, Todd; Qiao, Jian-Hua; Fishbein, Michael C; Freischlag, Julie A; Marcu, Laura

    2006-01-01

We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability.
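The core of LDT is to expand the fluorescence impulse response on an orthonormal, exponentially decaying basis and fit the expansion coefficients by least squares against the measurement convolved with the instrument response. The sketch below is a stand-in, not the authors' code: it builds a Laguerre-type basis by QR-orthonormalizing exponentially weighted polynomials (the standard discrete Laguerre recursion spans the same space), and the biexponential decay, IRF, and parameter values are illustrative assumptions.

```python
import numpy as np

def laguerre_basis(n_points, order, alpha=0.9):
    """Orthonormal exponentially decaying (Laguerre-type) basis.

    Built by QR-orthonormalizing n^j * alpha^(n/2) sequences, which span
    the same space as discrete Laguerre functions of parameter alpha.
    """
    n = np.arange(n_points)
    cols = np.stack([(n ** j) * alpha ** (n / 2.0) for j in range(order)], axis=1)
    q, _ = np.linalg.qr(cols)
    return q                       # shape (n_points, order)

def laguerre_deconvolve(measured, irf, order=6, alpha=0.9):
    """Estimate a fluorescence decay h from measured = irf * h by
    least-squares fitting of the expansion coefficients."""
    npts = len(measured)
    basis = laguerre_basis(npts, order, alpha)
    # Convolve each basis function with the instrument response function.
    design = np.stack([np.convolve(irf, basis[:, j])[:npts]
                       for j in range(order)], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
    return basis @ coeffs, coeffs

# Toy example: a biexponential decay measured through a Gaussian IRF.
t = np.arange(200)
decay = 0.7 * np.exp(-t / 12.0) + 0.3 * np.exp(-t / 40.0)
irf = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)
measured = np.convolve(irf, decay)[:200]
recovered, c = laguerre_deconvolve(measured, irf, order=8, alpha=0.92)
```

The handful of coefficients `c` is the compact, model-free feature vector that the record's classifiers (LDA, SLDA, PCA, ANN) operate on.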

  2. Laguerre-based method for analysis of time-resolved fluorescence data: application to in-vivo characterization and diagnosis of atherosclerotic lesions

    NASA Astrophysics Data System (ADS)

    Jo, Javier A.; Fang, Qiyin; Papaioannou, Thanassis; Baker, J. Dennis; Dorafshar, Amir; Reil, Todd; Qiao, Jianhua; Fishbein, Michael C.; Freischlag, Julie A.; Marcu, Laura

    2006-03-01

We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability.

  3. Laguerre-based method for analysis of time-resolved fluorescence data: application to in-vivo characterization and diagnosis of atherosclerotic lesions

    PubMed Central

    Jo, Javier A.; Fang, Qiyin; Papaioannou, Thanassis; Baker, J. Dennis; Dorafshar, Amir H.; Reil, Todd; Qiao, Jian-Hua; Fishbein, Michael C.; Freischlag, Julie A.; Marcu, Laura

    2007-01-01

We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability. PMID:16674179

  4. Resolving the 180-degree ambiguity in vector magnetic field measurements: The 'minimum' energy solution

    NASA Technical Reports Server (NTRS)

    Metcalf, Thomas R.

    1994-01-01

    I present a robust algorithm that resolves the 180-deg ambiguity in measurements of the solar vector magnetic field. The technique simultaneously minimizes both the divergence of the magnetic field and the electric current density using a simulated annealing algorithm. This results in the field orientation with approximately minimum free energy. The technique is well-founded physically and is simple to implement.
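The minimization the record describes can be caricatured in a small toy: treat the per-pixel 180-deg flip as a binary state, define an energy from the squared horizontal divergence and vertical current, and anneal with Metropolis acceptance. Everything below (the flat 2-D patch, the linear cooling schedule, t0, and the rotational test field) is an illustrative assumption, not Metcalf's algorithm or energy functional.

```python
import numpy as np

def energy(bx, by):
    """Proxy for |div B| + |J|: squared horizontal divergence plus squared
    vertical current over the patch (vertical-field terms omitted here)."""
    div = np.gradient(bx, axis=1) + np.gradient(by, axis=0)
    jz  = np.gradient(by, axis=1) - np.gradient(bx, axis=0)
    return np.sum(div ** 2) + np.sum(jz ** 2)

def anneal_ambiguity(bx, by, steps=8000, t0=1.0, seed=1):
    """Resolve the per-pixel 180-deg ambiguity by simulated annealing:
    propose single-pixel sign flips, accept with Metropolis probability."""
    rng = np.random.default_rng(seed)
    sign = np.ones(bx.shape)
    e = energy(sign * bx, sign * by)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6            # linear cooling
        i, j = rng.integers(bx.shape[0]), rng.integers(bx.shape[1])
        sign[i, j] *= -1                             # propose a flip
        e_new = energy(sign * bx, sign * by)
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                                # accept
        else:
            sign[i, j] *= -1                         # reject, undo flip
    return sign

# Toy patch: a smooth rotational field with random 180-deg flips applied.
ny, nx = 8, 8
y, x = np.mgrid[0:ny, 0:nx]
bx_true, by_true = -(y - (ny - 1) / 2), (x - (nx - 1) / 2)
rng = np.random.default_rng(0)
flip = np.where(rng.random((ny, nx)) < 0.4, -1.0, 1.0)
sign = anneal_ambiguity(flip * bx_true, flip * by_true)
```

The recovered configuration is only defined up to a global sign, since flipping every pixel leaves both the divergence and current magnitudes unchanged.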

  5. Resolving Phase Ambiguities In OQPSK

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien M.

    1991-01-01

Improved design for modulator and demodulator in offset-quaternary-phase-shift-keying (OQPSK) communication system enables receiver to resolve ambiguity in estimated phase of received signal. Features include unique-code-word modulation and detection and digital implementation of Costas loop in carrier-recovery subsystem. Enhances performance of carrier-recovery subsystem, reduces complexity of receiver by removing redundant circuits from previous design, and eliminates dependence of timing in receiver upon parallel-to-serial-conversion clock.

  6. Neural responses to ambiguity involve domain-general and domain-specific emotion processing systems.

    PubMed

    Neta, Maital; Kelley, William M; Whalen, Paul J

    2013-04-01

    Extant research has examined the process of decision making under uncertainty, specifically in situations of ambiguity. However, much of this work has been conducted in the context of semantic and low-level visual processing. An open question is whether ambiguity in social signals (e.g., emotional facial expressions) is processed similarly or whether a unique set of processors come on-line to resolve ambiguity in a social context. Our work has examined ambiguity using surprised facial expressions, as they have predicted both positive and negative outcomes in the past. Specifically, whereas some people tended to interpret surprise as negatively valenced, others tended toward a more positive interpretation. Here, we examined neural responses to social ambiguity using faces (surprise) and nonface emotional scenes (International Affective Picture System). Moreover, we examined whether these effects are specific to ambiguity resolution (i.e., judgments about the ambiguity) or whether similar effects would be demonstrated for incidental judgments (e.g., nonvalence judgments about ambiguously valenced stimuli). We found that a distinct task control (i.e., cingulo-opercular) network was more active when resolving ambiguity. We also found that activity in the ventral amygdala was greater to faces and scenes that were rated explicitly along the dimension of valence, consistent with findings that the ventral amygdala tracks valence. Taken together, there is a complex neural architecture that supports decision making in the presence of ambiguity: (a) a core set of cortical structures engaged for explicit ambiguity processing across stimulus boundaries and (b) other dedicated circuits for biologically relevant learning situations involving faces.

  7. Rapid Linguistic Ambiguity Resolution in Young Children with Autism Spectrum Disorder: Eye Tracking Evidence for the Limits of Weak Central Coherence.

    PubMed

    Hahn, Noemi; Snedeker, Jesse; Rabagliati, Hugh

    2015-12-01

    Individuals with autism spectrum disorders (ASD) have often been reported to have difficulty integrating information into its broader context, which has motivated the Weak Central Coherence theory of ASD. In the linguistic domain, evidence for this difficulty comes from reports of impaired use of linguistic context to resolve ambiguous words. However, recent work has suggested that impaired use of linguistic context may not be characteristic of ASD, and is instead better explained by co-occurring language impairments. Here, we provide a strong test of these claims, using the visual world eye tracking paradigm to examine the online mechanisms by which children with autism resolve linguistic ambiguity. To address concerns about both language impairments and compensatory strategies, we used a sample whose verbal skills were strong and whose average age (7; 6) was lower than previous work on lexical ambiguity resolution in ASD. Participants (40 with autism and 40 controls) heard sentences with ambiguous words in contexts that either strongly supported one reading or were consistent with both (John fed/saw the bat). We measured activation of the unintended meaning through implicit semantic priming of an associate (looks to a depicted baseball glove). Contrary to the predictions of weak central coherence, children with ASD, like controls, quickly used context to resolve ambiguity, selecting appropriate meanings within a second. We discuss how these results constrain the generality of weak central coherence. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  8. Ambiguity resolving based on cosine property of phase differences for 3D source localization with uniform circular array

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang

    2017-07-01

    Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity resolving approach is proposed for phase-based algorithms of a source's 3-D localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.
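    The paper's UCA-specific cosine-matching step is not reproduced here, but the core move — enumerate the integer ambiguities 2πk and keep the candidate consistent with coarse, unambiguous side information — can be sketched for a single baseline (all values illustrative):

```python
import numpy as np

def wrap(phi):
    """Wrap a phase to (-pi, pi]."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def resolve_ambiguity(phi_wrapped, d, lam, coarse_angle, k_max=5):
    """Enumerate integer ambiguities 2*pi*k and keep the candidate
    direction closest to a coarse (unambiguous) angle estimate."""
    best = None
    for k in range(-k_max, k_max + 1):
        phi = phi_wrapped + 2 * np.pi * k
        s = phi * lam / (2 * np.pi * d)  # candidate sin(theta)
        if abs(s) > 1:
            continue  # not a physical direction
        theta = np.arcsin(s)
        if best is None or abs(theta - coarse_angle) < abs(best - coarse_angle):
            best = theta
    return best

# Baseline of 2.5 wavelengths -> the true phase exceeds pi and wraps.
lam, d = 1.0, 2.5
theta_true = np.deg2rad(40.0)
phi_true = 2 * np.pi * d * np.sin(theta_true) / lam
phi_meas = wrap(phi_true)
theta_hat = resolve_ambiguity(phi_meas, d, lam, coarse_angle=np.deg2rad(37.0))
```

    The cosine property in the paper plays the role of the coarse estimate here: it supplies the side information that singles out one integer candidate.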

  9. Resolving Ambiguity in Nonmonotonic Inheritance Hierarchies

    DTIC Science & Technology

    1991-08-01

    this case, specificity cannot resolve the ambiguity w.r.t. platypus at mammal, and r2 supports both of the assertions marked (***): that platypuses … Figure 2: Is a platypus a mammal? Figure 3: A blue whale is an aquatic creature. …eater does not defeat either the assertion that Joe is a … derived conclusion that platypuses are mammals is directly opposed by the (equally legitimate) conclusion that platypuses are not mammals. In this case

  10. Comment on ‘A novel method for fast and robust estimation of fluorescence decay dynamics using constrained least-square deconvolution with Laguerre expansion’

    NASA Astrophysics Data System (ADS)

    Zhang, Yongliang; Day-Uei Li, David

    2017-02-01

    This comment clarifies that Poisson noise, rather than Gaussian noise, should be used to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. We also correct an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only to diagnostic but also to wider bioimaging applications, it is desirable to have precise noise models and equations.

  11. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground-calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
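    The abstract does not spell out the deconvolution algorithm in closed form; one standard regularized choice, shown here purely as an illustrative sketch, is Wiener deconvolution, in which the Fourier-domain division by the transfer function is damped by a noise-to-signal term:

```python
import numpy as np

def wiener_deconvolve(measured, kernel, snr=100.0):
    """1-D Wiener deconvolution: divide in the Fourier domain, with a
    noise-to-signal term (1/snr) bounding the noise amplification."""
    H = np.fft.fft(kernel, len(measured))
    G = np.fft.fft(measured)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * G))

# Toy example: recover a narrow detector response blurred by a wider PSF.
n = 256
x = np.arange(n)
detector = np.exp(-0.5 * ((x - 60) / 3.0) ** 2)          # "true" response
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 6.0) ** 2)   # circularly centred blur
psf /= psf.sum()
measured = np.real(np.fft.ifft(np.fft.fft(detector) * np.fft.fft(psf)))
recovered = wiener_deconvolve(measured, psf, snr=1e6)
```

    The paper's white-noise test-scene ensemble serves the same purpose as the regularization term: it characterizes how much noise the inversion may amplify before the damping must take over.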

  12. Top-Down Influence in Young Children's Linguistic Ambiguity Resolution

    ERIC Educational Resources Information Center

    Rabagliati, Hugh; Pylkkanen, Liina; Marcus, Gary F.

    2013-01-01

    Language is rife with ambiguity. Do children and adults meet this challenge in similar ways? Recent work suggests that while adults resolve syntactic ambiguities by integrating a variety of cues, children are less sensitive to top-down evidence. We test whether this top-down insensitivity is specific to syntax or a general feature of children's…

  13. Phase-ambiguity resolution for QPSK modulation systems. Part 1: A review

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    Part 1 reviews the current phase-ambiguity resolution techniques for QPSK coherent modulation systems. Here, those known and published methods of resolving phase ambiguity for QPSK with and without Forward-Error-Correcting (FEC) codes are discussed. The necessary background is provided for a complete understanding of the second part, where a new technique will be discussed. A technique appropriate for the Consultative Committee for Space Data Systems (CCSDS) is recommended for consideration in future standards on phase-ambiguity resolution for QPSK coherent modulation systems.

  14. Ambiguity resolution in systems using Omega for position location

    NASA Technical Reports Server (NTRS)

    Frenkel, G.; Gan, D. G.

    1974-01-01

    The lane ambiguity problem prevents the utilization of the Omega system for many applications such as locating buoys and balloons. The method of multiple lines of position introduced herein uses signals from four or more Omega stations for ambiguity resolution. The coordinates of the candidate points are determined first through the use of the Newton iterative procedure. Subsequently, a likelihood function is generated for each point, and the ambiguity is resolved by selecting the most likely point. The method was tested through simulation.
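    The selection step — generate candidate lane points, then score each by its likelihood against redundant station measurements — can be sketched as follows (a 1-D toy with hypothetical numbers, not the Omega geometry):

```python
import numpy as np

def most_likely_candidate(candidates, residual_fn, sigma=1.0):
    """Score each ambiguous candidate fix by its Gaussian log-likelihood
    against redundant measurements and return the most likely one."""
    loglik = [-0.5 * np.sum(residual_fn(c) ** 2) / sigma**2 for c in candidates]
    return candidates[int(np.argmax(loglik))]

# 1-D toy: lane ambiguity leaves four candidate positions; ranges to three
# (hypothetical) stations single out the true position at x = 30.
rng = np.random.default_rng(0)
candidates = np.array([10.0, 20.0, 30.0, 40.0])
stations = np.array([0.0, 55.0, 100.0])
observed = np.abs(stations - 30.0) + rng.normal(0.0, 0.1, 3)
best = most_likely_candidate(candidates, lambda c: np.abs(stations - c) - observed)
```

    The redundancy is what makes this work: with only the minimum number of stations every candidate fits the data equally well, and the likelihoods cannot separate the lanes.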

  15. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument-response-function (IRF) - which cause temporal broadening of the measured temporal-point-spread-function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF was deconvolved from the TPSFs to recover the distribution of time-of-flights (DTOFs) of the detected photons. The microscopic Beer-Lambert law was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduced its sensitivity to absorption changes occurring in deep layers. We also demonstrated that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
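    The report's deconvolution technique is not named in the abstract; Richardson-Lucy iteration is one robust option that keeps the recovered DTOF non-negative, sketched here on a toy TPSF (all parameters illustrative):

```python
import numpy as np

def richardson_lucy(tpsf, irf, n_iter=300):
    """Richardson-Lucy deconvolution: a multiplicative iterative scheme
    that keeps the estimate non-negative throughout."""
    est = np.full_like(tpsf, tpsf.mean())
    irf_flip = irf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, irf, mode="same")
        ratio = tpsf / np.maximum(blurred, 1e-12)
        est = est * np.convolve(ratio, irf_flip, mode="same")
    return est

# Toy TPSF: an exponential decay (the DTOF) broadened by a Gaussian IRF.
t = np.arange(200, dtype=float)
dtof = np.where(t >= 50, np.exp(-(t - 50) / 20.0), 0.0)
k = np.arange(31, dtype=float)
irf = np.exp(-0.5 * ((k - 15) / 4.0) ** 2)
irf /= irf.sum()
tpsf = np.convolve(dtof, irf, mode="same")
recovered = richardson_lucy(tpsf, irf)
```

    After deconvolution the sharp rise of the DTOF is largely restored, which is exactly the feature that separates early- from late-arriving photons in time-windowed analysis.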

  16. Syntactic Computations in the Language Network: Characterizing Dynamic Network Properties Using Representational Similarity Analysis

    PubMed Central

    Tyler, Lorraine K.; Cheung, Teresa P. L.; Devereux, Barry J.; Clarke, Alex

    2013-01-01

    The core human capacity of syntactic analysis involves a left hemisphere network involving left inferior frontal gyrus (LIFG) and posterior middle temporal gyrus (LMTG) and the anatomical connections between them. Here we use magnetoencephalography (MEG) to determine the spatio-temporal properties of syntactic computations in this network. Listeners heard spoken sentences containing a local syntactic ambiguity (e.g., “… landing planes …”), at the offset of which they heard a disambiguating verb and decided whether it was an acceptable/unacceptable continuation of the sentence. We charted the time-course of processing and resolving syntactic ambiguity by measuring MEG responses from the onset of each word in the ambiguous phrase and the disambiguating word. We used representational similarity analysis (RSA) to characterize syntactic information represented in the LIFG and left posterior middle temporal gyrus (LpMTG) over time and to investigate their relationship to each other. Testing a variety of lexico-syntactic and ambiguity models against the MEG data, our results suggest early lexico-syntactic responses in the LpMTG and later effects of ambiguity in the LIFG, pointing to a clear differentiation in the functional roles of these two regions. Our results suggest the LpMTG represents and transmits lexical information to the LIFG, which responds to and resolves the ambiguity. PMID:23730293

  17. Sounding of the Ion Energization Region: Resolving Ambiguities

    NASA Technical Reports Server (NTRS)

    LaBelle, James

    2003-01-01

    Dartmouth College provided a single-channel high-frequency wave receiver to the Sounding of the Ion Energization Region: Resolving Ambiguities (SIERRA) rocket experiment launched from Poker Flat, Alaska, in January 2002. The receiver used signals from booms, probes, preamplifiers, and differential amplifiers provided by Cornell University coinvestigators. Output was to a dedicated 5 MHz telemetry link provided by WFF, with a small amount of additional Pulse Code Modulation (PCM) telemetry required for the receiver gain information. We also performed preliminary analysis of the data. The work completed is outlined below, in chronological order.

  18. Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2016-12-01

    Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
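    The contrast between the two norms can be illustrated with a toy regression in which a few observations carry a large jump, mimicking unresolved integer ambiguities: the L1 fit (here via iteratively reweighted least squares) down-weights them, while the L2 fit is dragged away. This is a generic sketch, not the c5++ implementation:

```python
import numpy as np

def l2_fit(A, y):
    """Ordinary least squares (L2-norm minimization)."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

def l1_fit(A, y, n_iter=50, eps=1e-8):
    """L1-norm minimization via iteratively reweighted least squares;
    observations with large residuals get weights ~1/|residual|."""
    x = l2_fit(A, y)
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - A @ x), eps)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

# Clock-like linear trend; every 10th point carries a +5 "ambiguity" jump.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t + rng.normal(0.0, 0.01, 50)
y[::10] += 5.0
x_l2, x_l1 = l2_fit(A, y), l1_fit(A, y)
```

    The L1 intercept stays near the true value of 2.0, while the L2 intercept is pulled upward by the contaminated points — the same mechanism that makes L1 estimation more forgiving of mis-resolved group-delay ambiguities.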

  19. Application of the laguerre deconvolution method for time-resolved fluorescence spectroscopy to the characterization of atherosclerotic plaques.

    PubMed

    Jo, J A; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Beseth, B; Dorafshar, A H; Reil, T; Baker, D; Freischlag, J; Marcu, L

    2005-01-01

    This study investigates the ability of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) to detect inflammation in atherosclerotic lesions, a key feature of plaque vulnerability. A total of 348 TR-LIFS measurements were taken from carotid plaques of 30 patients and subsequently analyzed using the Laguerre deconvolution technique. The investigated spots were classified as Early, Fibrotic/Calcified, or Inflamed lesions. A stepwise linear discriminant analysis algorithm was developed using spectral and TR features (normalized intensity values and Laguerre expansion coefficients at discrete emission wavelengths, respectively). Features from only three emission wavelengths (390, 450 and 500 nm) were used in the classifier. The Inflamed lesions were discriminated with sensitivity > 80% and specificity > 90% when the Laguerre expansion coefficients were included in the feature space. These results indicate that TR-LIFS information derived from the Laguerre expansion coefficients at a few selected emission wavelengths can discriminate inflammation in atherosclerotic plaques. We believe that TR-LIFS derived Laguerre expansion coefficients can provide a valuable additional dimension for the detection of vulnerable plaques.
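    The core of the Laguerre deconvolution technique — expand the intrinsic decay on a Laguerre basis and fit the expansion coefficients by least squares against the IRF-convolved measurement — can be sketched as follows (an unconstrained sketch with an illustrative time-scale parameter; the clinical feature extraction and classifier are omitted):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def laguerre_basis(n_points, order, tau):
    """Columns are exp(-t/(2*tau)) * L_j(t/tau) for j = 0..order-1,
    i.e. sampled Laguerre functions with time scale tau."""
    u = np.arange(n_points) / tau
    return np.column_stack(
        [np.exp(-u / 2) * lagval(u, np.eye(order)[j]) for j in range(order)]
    )

def laguerre_deconvolve(measured, irf, order=6, tau=15.0):
    """Model the measurement as (intrinsic decay = B @ c) convolved with
    the IRF and solve for the expansion coefficients c by least squares."""
    n = len(measured)
    B = laguerre_basis(n, order, tau)
    M = np.column_stack([np.convolve(irf, B[:, j])[:n] for j in range(order)])
    c, *_ = np.linalg.lstsq(M, measured, rcond=None)
    return B @ c  # reconstructed intrinsic fluorescence decay

# Toy: a mono-exponential decay blurred by a Gaussian IRF.
t = np.arange(200, dtype=float)
decay = np.exp(-t / 20.0)
k = np.arange(31, dtype=float)
irf = np.exp(-0.5 * ((k - 15) / 4.0) ** 2)
irf /= irf.sum()
measured = np.convolve(irf, decay)[:200]
recon = laguerre_deconvolve(measured, irf)
```

    In the paper's classifier it is the fitted coefficients themselves, not the reconstructed decay, that serve as features at each emission wavelength.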

  20. Effect of Prefrontal Cortex Damage on Resolving Lexical Ambiguity in Text

    ERIC Educational Resources Information Center

    Frattali, Carol; Hanna, Rebecca; McGinty, Anita Shukla; Gerber, Lynn; Wesley, Robert; Grafman, Jordan; Coelho, Carl

    2007-01-01

    The function of suppression of context-inappropriate meanings during lexical ambiguity resolution was examined in 25 adults with prefrontal cortex damage (PFCD) localized to the left (N = 8), right (N = 6), or bilaterally (N = 11); and 21 matched Controls. Results revealed unexpected inverse patterns of suppression between PFCD and Control groups,…

  1. Use of Yohkoh SXT in Measuring the Net Current and CME Productivity of Active Regions

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.; Six, N. Frank (Technical Monitor)

    2001-01-01

    In our investigation of the correlation of global nonpotentiality of active regions to their CME productivity (Falconer, D.A. 2001, JGR, in press, and Falconer, Moore, & Gary, 2000, EOS 82, 20 S323), we use Yohkoh SXT images for two purposes. The first use is to help resolve the 180-degree ambiguity in the direction of the observed transverse magnetic field. Resolution of the 180-degree ambiguity is important, since the net current, one of our measures of global nonpotentiality, is derived from integrating the dot product of the transverse field around a contour (I_N = ∮ B_T · dl). The ambiguity results from the observed transverse field being determined from the linear polarization, which gives the plane of the field direction but leaves a 180-degree ambiguity. Automated methods to resolve the ambiguity range from the simple acute angle rule (Falconer, D.A. 2001) to the more sophisticated annealing method (Metcalf, T.R. 1994). For many active regions, especially ones that are nearly potential, these methods work well. But for very nonpotential active regions where the shear angle (the angle between the observed and potential transverse field) is near 90 degrees throughout large swaths along the main neutral line, both methods can resolve the ambiguity incorrectly for long segments of the neutral line. By determining from coronal images, such as those from Yohkoh/SXT, the sense of shear along the main neutral line in the active region, these cases can be identified and corrected by a modification of the acute angle rule described here. The second use of Yohkoh/SXT in this study is to check for the cusped coronal arcades of long-duration eruptive flares. This signature is an excellent proxy for CMEs, and was used by Canfield, Hudson, and McKenzie (1999, GRL, 26, 627-630). This work is funded by NSF through the Space Weather Program and by NASA through the Solar Physics Supporting Research and Technology Program.

  2. Array-based satellite phase bias sensing: theory and GPS/BeiDou/QZSS results

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2014-09-01

    Single-receiver integer ambiguity resolution (IAR) is a measurement concept that makes use of network-derived non-integer satellite phase biases (SPBs), among other corrections, to recover and resolve the integer ambiguities of the carrier-phase data of a single GNSS receiver. If it is realized, the very precise integer ambiguity-resolved carrier-phase data would then contribute to the estimation of the receiver’s position, thus making (near) real-time precise point positioning feasible. Proper definition and determination of the SPBs take a leading part in developing the idea of single-receiver IAR. In this contribution, the concept of array-based between-satellite single-differenced (SD) SPB determination is introduced, which aims to improve the code-dominated precision of the SD-SPB corrections. The underlying model is realized by giving the role of the local reference network to an array of antennas, mounted on rigid platforms, that are separated by short distances so that the same ionospheric delay is assumed to be experienced by all the antennas. To that end, a closed-form expression of the array-aided SD-SPB corrections is presented, thereby proposing a simple strategy to compute the SD-SPBs. After resolving double-differenced ambiguities of the array’s data, the variance of the SD-SPB corrections is shown to be reduced by a factor equal to the number of antennas. This improvement in precision is also affirmed by numerical results of the three GNSSs GPS, BeiDou and QZSS. Experimental results demonstrate that the integer-recovered ambiguities converge to integers faster as the number of antennas aiding the SD-SPB corrections increases.
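    The quoted 1/N variance scaling of the array-averaged corrections is easy to verify with a Monte-Carlo toy (arbitrary noise level; not the paper's full observation model):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.30        # per-antenna (code-dominated) noise, arbitrary units
n_trials = 200_000

def averaged_std(n_antennas):
    """Std of the bias estimate after averaging over n_antennas antennas."""
    noise = rng.normal(0.0, sigma, size=(n_trials, n_antennas))
    return noise.mean(axis=1).std()

std_1 = averaged_std(1)
std_4 = averaged_std(4)
ratio = (std_1 / std_4) ** 2   # variance ratio; expect about 4 for 4 antennas
```

    The rigid short-baseline geometry is what licenses the simple averaging: because every antenna sees (approximately) the same ionospheric delay and satellite bias, the per-antenna estimates differ only by independent noise.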

  3. Accounting for Regressive Eye-Movements in Models of Sentence Processing: A Reappraisal of the Selective Reanalysis Hypothesis

    ERIC Educational Resources Information Center

    Mitchell, Don C.; Shen, Xingjia; Green, Matthew J.; Hodgson, Timothy L.

    2008-01-01

    When people read temporarily ambiguous sentences, there is often an increased prevalence of regressive eye-movements launched from the word that resolves the ambiguity. Traditionally, such regressions have been interpreted at least in part as reflecting readers' efforts to re-read and reconfigure earlier material, as exemplified by the Selective…

  4. Monolingual and Bilingual Preschoolers' Use of Gestures to Interpret Ambiguous Pronouns

    ERIC Educational Resources Information Center

    Yow, W. Quin

    2015-01-01

    Young children typically do not use order-of-mention to resolve ambiguous pronouns, but may do so if given additional cues, such as gestures. Additionally, this ability to utilize gestures may be enhanced in bilingual children, who may be more sensitive to such cues due to their unique language experience. We asked monolingual and bilingual…

  5. Evaluation of ambiguous associations in the amygdala by learning the structure of the environment

    PubMed Central

    Madarasz, Tamas J.; Diaz-Mataix, Lorenzo; Akhand, Omar; Ycu, Edgar A.; LeDoux, Joseph E.; Johansen, Joshua P.

    2017-01-01

    Recognizing predictive relationships is critical for survival, but an understanding of the underlying neural mechanisms remains elusive. In particular it is unclear how the brain distinguishes predictive relationships from spurious ones when evidence about a relationship is ambiguous, or how it computes predictions given such uncertainty. To better understand this process we introduced ambiguity into an associative learning task by presenting aversive outcomes both in the presence and absence of a predictive cue. Electrophysiological and optogenetic approaches revealed that amygdala neurons directly regulate and track the effects of ambiguity on learning. Contrary to established accounts of associative learning however, interference from competing associations was not required to assess an ambiguous cue-outcome contingency. Instead, animals’ behavior was explained by a normative account that evaluates different models of the environment’s statistical structure. These findings suggest an alternative view on the role of amygdala circuits in resolving ambiguity during aversive learning. PMID:27214568

  6. Evaluation of ambiguous associations in the amygdala by learning the structure of the environment.

    PubMed

    Madarasz, Tamas J; Diaz-Mataix, Lorenzo; Akhand, Omar; Ycu, Edgar A; LeDoux, Joseph E; Johansen, Joshua P

    2016-07-01

    Recognizing predictive relationships is critical for survival, but an understanding of the underlying neural mechanisms remains elusive. In particular, it is unclear how the brain distinguishes predictive relationships from spurious ones when evidence about a relationship is ambiguous, or how it computes predictions given such uncertainty. To better understand this process, we introduced ambiguity into an associative learning task by presenting aversive outcomes both in the presence and in the absence of a predictive cue. Electrophysiological and optogenetic approaches revealed that amygdala neurons directly regulated and tracked the effects of ambiguity on learning. Contrary to established accounts of associative learning, however, interference from competing associations was not required to assess an ambiguous cue-outcome contingency. Instead, animals' behavior was explained by a normative account that evaluates different models of the environment's statistical structure. These findings suggest an alternative view of amygdala circuits in resolving ambiguity during aversive learning.

  7. How Does Context Play "a Part" in Splitting Words "Apart"? Production and Perception of Word Boundaries in Casual Speech

    ERIC Educational Resources Information Center

    Kim, Dahee; Stephens, Joseph D. W.; Pitt, Mark A.

    2012-01-01

    Four experiments examined listeners' segmentation of ambiguous schwa-initial sequences (e.g., "a long" vs. "along") in casual speech, where acoustic cues can be unclear, possibly increasing reliance on contextual information to resolve the ambiguity. In Experiment 1, acoustic analyses of talkers' productions showed that the one-word and two-word…

  8. Not So Black and White: Memory for Ambiguous Group Members

    PubMed Central

    Pauker, Kristin; Weisbuch, Max; Ambady, Nalini; Sommers, Samuel R; Ivcevic, Zorana; Adams, Reginald B

    2013-01-01

    Exponential increases in multiracial identities expected over the next century create a conundrum for perceivers accustomed to classifying people as “own” or “other” race. The current research examines how perceivers resolve this dilemma with regard to the “own-race bias.” We hypothesized that perceivers would not be motivated to include ambiguous-race individuals in the in-group and would therefore have some difficulty remembering them. Both racially-ambiguous and other-race faces were misremembered more often than own-race faces (Study 1), though memory for ambiguous faces was improved among perceivers motivated to include biracial individuals in the in-group (Study 2). Racial labels assigned to racially ambiguous faces determined memory for these faces, suggesting that uncertainty provides the motivational context for discounting ambiguous faces in memory (Study 3). Finally, an inclusion motivation fostered cognitive associations between racially-ambiguous faces and the in-group. Moreover, the extent to which perceivers associated racially-ambiguous faces with the in-group predicted memory for ambiguous faces and accounted for the impact of motivation on memory (Study 4). Thus, memory for biracial individuals seems to involve a flexible person construal process shaped by motivational factors. PMID:19309203

  9. No Fear of Commitment: Children's Incremental Interpretation in English and Japanese Wh-Questions

    ERIC Educational Resources Information Center

    Omaki, Akira; Davidson White, Imogen; Goro, Takuya; Lidz, Jeffrey; Phillips, Colin

    2014-01-01

    Much work on child sentence processing has demonstrated that children are able to use various linguistic cues to incrementally resolve temporary syntactic ambiguities, but they fail to use syntactic or interpretability cues that arrive later in the sentence. The present study explores whether children incrementally resolve filler-gap dependencies,…

  10. Apparent ambiguities in the post-Newtonian expansion for binary systems

    NASA Astrophysics Data System (ADS)

    Porto, Rafael A.; Rothstein, Ira Z.

    2017-07-01

    We discuss the source of the apparent ambiguities arising in the calculation of the dynamics of binary black holes within the post-Newtonian framework. Divergences appear in both the near and far zone calculations, and may be of either ultraviolet (UV) or infrared (IR) nature. The effective field theory (EFT) formalism elucidates the origin of the singularities which may introduce apparent ambiguities. In particular, the only (physical) "ambiguity parameters" that necessitate a matching calculation correspond to unknown finite size effects, which first appear at fifth post-Newtonian (5PN) order for nonspinning bodies. We demonstrate that the ambiguities linked to IR divergences in the near zone, that plague the recent derivations of the binding energy at 4PN order, both in the Arnowitt, Deser, and Misner (ADM) and "Fokker-action" approach, can be resolved by implementing the so-called zero-bin subtraction in the EFT framework. The procedure yields ambiguity-free results without the need of additional information beyond the PN expansion.

  11. Performance analysis of multiple PRF technique for ambiguity resolution

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1992-01-01

    For short wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar in different PRF's in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of PRF's. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
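    The multiple-PRF idea can be sketched generically: each PRF yields the Doppler centroid only modulo that PRF, and a grid search over absolute centroids keeps the value consistent with all of the aliased estimates (illustrative numbers; not the SIR-C processing chain):

```python
import numpy as np

def resolve_doppler(aliased, prfs, f_max=10_000.0, step=1.0):
    """Grid-search the absolute Doppler centroid whose mod-PRF values best
    match the aliased estimates from all PRFs."""
    cands = np.arange(-f_max, f_max, step)
    cost = np.zeros_like(cands)
    for f_a, prf in zip(aliased, prfs):
        wrapped = (cands - f_a + prf / 2) % prf - prf / 2  # wrapped mismatch
        cost += wrapped ** 2
    return cands[int(np.argmin(cost))]

# Toy: three PRFs alias a 4321 Hz centroid differently; jointly they are
# unambiguous out to the (much larger) least common multiple of the PRFs.
prfs = [1500.0, 1700.0, 1900.0]
f_true = 4321.0
aliased = [((f_true + p / 2) % p) - p / 2 for p in prfs]
f_hat = resolve_doppler(aliased, prfs)
```

    In practice each aliased estimate is noisy, so the PRFs are chosen to keep the joint ambiguity interval wide and the candidate costs well separated, which is what drives the quoted probabilities of correct resolution.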

  12. A Causal Role for V5/MT Neurons Coding Motion-Disparity Conjunctions in Resolving Perceptual Ambiguity

    PubMed Central

    Krug, Kristine; Cicmil, Nela; Parker, Andrew J.; Cumming, Bruce G.

    2013-01-01

    Judgments about the perceptual appearance of visual objects require the combination of multiple parameters, like location, direction, color, speed, and depth. Our understanding of perceptual judgments has been greatly informed by studies of ambiguous figures, which take on different appearances depending upon the brain state of the observer. Here we probe the neural mechanisms hypothesized as responsible for judging the apparent direction of rotation of ambiguous structure from motion (SFM) stimuli. Resolving the rotation direction of SFM cylinders requires the conjoint decoding of direction of motion and binocular depth signals [1, 2]. Within cortical visual area V5/MT of two macaque monkeys, we applied electrical stimulation at sites with consistent multiunit tuning to combinations of binocular depth and direction of motion, while the monkey made perceptual decisions about the rotation of SFM stimuli. For both ambiguous and unambiguous SFM figures, rotation judgments shifted as if we had added a specific conjunction of disparity and motion signals to the stimulus elements. This is the first causal demonstration that the activity of neurons in V5/MT contributes directly to the perception of SFM stimuli and by implication to decoding the specific conjunction of disparity and motion, the two different visual cues whose combination drives the perceptual judgment. PMID:23871244

  13. Simply Imagining Sunshine, Lollipops and Rainbows Will Not Budge the Bias: The Role of Ambiguity in Interpretive Bias Modification.

    PubMed

    Clarke, Patrick J F; Nanthakumar, Shenooka; Notebaert, Lies; Holmes, Emily A; Blackwell, Simon E; Macleod, Colin

    2014-01-01

    Imagery-based interpretive bias modification (CBM-I) involves repeatedly imagining scenarios that are initially ambiguous before being resolved as either positive or negative in the last word/s. While the presence of such ambiguity is assumed to be important to achieve change in selective interpretation, it is also possible that the act of repeatedly imagining positive or negative events could produce such change in the absence of ambiguity. The present study sought to examine whether the ambiguity in imagery-based CBM-I is necessary to elicit change in interpretive bias, or whether the emotional content of the imagined scenarios is sufficient to produce such change. An imagery-based CBM-I task was delivered to participants in one of four conditions, where the valence of the imagined scenarios was either positive or negative, and the ambiguity of the scenario was either present (until the last word/s) or absent (emotional valence was evident from the start). Results indicate that only those who received scenarios in which the ambiguity was present acquired an interpretive bias consistent with the emotional valence of the scenarios, suggesting that the act of imagining positive or negative events will only influence patterns of interpretation when the emotional ambiguity is a consistent feature.

  14. Automated ambiguity estimation for VLBI Intensive sessions using L1-norm

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2016-12-01

    Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays. These data contain an unknown number of integer ambiguities in the observed group delays. They are introduced as a side-effect of the bandwidth synthesis technique, which is used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies. The increase in the number of sessions is approximately 5% for each weighting strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
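
The contrast between L1- and L2-norm estimation that drives this result can be sketched with a toy regression problem: an L1 solution, here approximated by iteratively reweighted least squares (IRLS), is far less sensitive to occasional gross outliers than the ordinary least-squares (L2) solution. This is only an illustrative sketch, not the c5++ implementation; the design matrix and data below are invented.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Approximate the L1-norm solution of A x ~= b via iteratively
    reweighted least squares (IRLS)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # L2 starting point
    for _ in range(iters):
        # weights ~ 1/|residual| turn the L2 cost into an L1 cost
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)
        x = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)[0]
    return x

# Toy model: delay observations = clock offset + rate * t, with a few
# gross outliers standing in for wrongly fixed ambiguities
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
A = np.column_stack([np.ones_like(t), t])
b = A @ np.array([0.3, -1.2]) + 0.01 * rng.standard_normal(t.size)
b[::10] += 5.0  # gross outliers

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
x_l1 = irls_l1(A, b)
# x_l1 stays close to (0.3, -1.2); x_l2 is pulled away by the outliers
```

The same robustness is why an L1 objective recovers the correct integer ambiguities more often than L2 when a few observations are badly wrong.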

  15. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it suffers from some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are successfully adapted to SHB, yielding highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships between the reconstructed source strength and pressure contribution and the focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN can all not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of single or incoherent sources. (2) RL is the most suitable for coherent sources, followed by DAMAS and NNLS; CLEAN is the least suitable, owing to its failure to suppress sidelobes. (3) The previous two results hold whether or not the real distance from the source to the array center equals the assumed distance, referred to as the focus distance. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study is of great significance for the accurate localization and quantification of acoustic sources in cabin environments.
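
The deconvolution idea underlying these methods can be illustrated with a minimal 1-D Richardson-Lucy sketch: the "dirty" map is modeled as the true source distribution convolved with the PSF, and the multiplicative RL iteration sharpens it back toward point sources. The PSF and source layout below are invented; a real SHB application works on a 2-D focus grid with the spherical-harmonics PSF.

```python
import numpy as np

def richardson_lucy(measured, psf, iters=200):
    """1-D Richardson-Lucy deconvolution (nonnegative, iterative)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]  # mirrored PSF for the correction step
    est = np.full_like(measured, measured.mean())  # positive start
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = measured / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Two point sources blurred by a broad PSF (the beamformer's response)
true = np.zeros(64)
true[20] = 1.0
true[30] = 0.5
psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
dirty = np.convolve(true, psf / psf.sum(), mode="same")

clean = richardson_lucy(dirty, psf)
# the peaks in `clean` sharpen back toward the two true source positions
```

The nonnegativity of the RL update is what suppresses the sidelobe artifacts that a plain inverse filter would amplify.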

  16. Proteomic Prediction of Breast Cancer Risk: A Cohort Study

    DTIC Science & Technology

    2007-03-01

    Total 1728 1189 68.81 (c) Data processing. Data analysis was performed using in-house software (Du P, Angeletti RH. Automatic deconvolution of...isotope-resolved mass spectra using variable selection and quantized peptide mass distribution. Anal Chem. 78:3385-92, 2006; P Du, R Sudha, MB...control. Reportable Outcomes: So far our publications have been on the development of algorithms for signal processing: 1. Du P, Angeletti RH

  17. New methods for time-resolved fluorescence spectroscopy data analysis based on the Laguerre expansion technique--applications in tissue diagnosis.

    PubMed

    Jo, J A; Marcu, L; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Beseth, B; Dorafshar, A H; Reil, T; Baker, D; Freischlag, J

    2007-01-01

    A new deconvolution method for the analysis of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data is introduced and applied to tissue diagnosis. The intrinsic TR-LIFS decays are expanded on a Laguerre basis, and the computed Laguerre expansion coefficients (LEC) are used to characterize the sample fluorescence emission. The method was applied to the diagnosis of vulnerable atherosclerotic plaques. In a first stage, using a rabbit atherosclerotic model, 73 TR-LIFS in-vivo measurements from the normal and atherosclerotic aorta segments of eight rabbits were taken. The Laguerre deconvolution technique was able to accurately deconvolve the TR-LIFS measurements. More interestingly, the LEC reflected the changes in the arterial biochemical composition and provided discrimination of lesions rich in macrophages/foam-cells with high sensitivity (> 85%) and specificity (> 95%). In a second stage, 348 TR-LIFS measurements were obtained from the explanted carotid arteries of 30 patients. Lesions with significant inflammatory cells (macrophages/foam-cells and lymphocytes) were detected with high sensitivity (> 80%) and specificity (> 90%), using LEC-based classifiers. This study has demonstrated the potential of using TR-LIFS information by means of LEC for in vivo tissue diagnosis, and specifically for detecting inflammation in atherosclerotic lesions, a key marker of plaque vulnerability.
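
The core of the expansion approach can be sketched as follows: expand the unknown decay on an exponentially weighted orthonormal basis and fit the expansion coefficients so that the excitation pulse convolved with the reconstructed decay matches the measurement. For simplicity this sketch builds a QR-orthonormalized exponential-polynomial basis, which spans the same space as the discrete Laguerre functions, rather than using the exact Laguerre recursion; all signals below are synthetic.

```python
import numpy as np

def laguerre_like_basis(n_points, order, alpha=0.92):
    """Orthonormal basis of exponentially weighted polynomials
    (same span as discrete Laguerre functions with decay alpha)."""
    n = np.arange(n_points)
    cols = np.column_stack([alpha ** (n / 2.0) * n ** j for j in range(order)])
    Q, _ = np.linalg.qr(cols)  # orthonormal columns
    return Q

def deconvolve(y, pulse, basis):
    """Fit coefficients c so that pulse (*) (basis @ c) ~= y."""
    conv_cols = np.column_stack(
        [np.convolve(pulse, basis[:, j])[: len(y)] for j in range(basis.shape[1])]
    )
    c, *_ = np.linalg.lstsq(conv_cols, y, rcond=None)
    return basis @ c, c

# Synthetic decay measured through a finite excitation pulse
n = np.arange(200)
true_decay = np.exp(-n / 25.0)
pulse = np.exp(-0.5 * ((n - 10) / 3.0) ** 2)
y = np.convolve(pulse, true_decay)[:200]

basis = laguerre_like_basis(200, order=8)
est_decay, coeffs = deconvolve(y, pulse, basis)
# est_decay closely recovers exp(-n/25) without ever dividing spectra
```

Because the fit is linear in the coefficients, no iterative nonlinear optimization is needed, which is what makes the method fast enough for classification features.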

  18. Sensorimotor Adaptation Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clement, G. R.; Harm, D. L.; Rupert, A. H.; Guedry, F. E.; Reschke, M. F.

    2005-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. Our general hypothesis is that the central nervous system utilizes both multi-sensory integration and frequency segregation as neural strategies to resolve the ambiguity of tilt and translation stimuli. Movement in an altered gravity environment, such as weightlessness without a stable gravity reference, results in new patterns of sensory cues. For example, the semicircular canals, vision and neck proprioception provide information about head tilt on orbit without the normal otolith head-tilt position that is omnipresent on Earth. Adaptive changes in how inertial cues from the otolith system are integrated with other sensory information lead to perceptual and postural disturbances upon return to Earth's gravity. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of disorientation and tilt-translation disturbances reported by crewmembers during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation during tilt and translation motion.

  20. GNSS triple-frequency geometry-free and ionosphere-free track-to-track ambiguities

    NASA Astrophysics Data System (ADS)

    Wang, Kan; Rothacher, Markus

    2015-06-01

    During the last few years, more and more GNSS satellites have become available sending signals on three or even more frequencies. Examples are the GPS Block IIF and the Galileo In-Orbit-Validation (IOV) satellites. Various investigations have been performed to make use of the increasing number of frequencies to find a compromise between eliminating different error sources and minimizing the noise level, including the investigations in the triple-frequency geometry-free (GF) and ionosphere-free (IF) linear combinations, which eliminate all the geometry-related errors and the first-order term of the ionospheric delays. In contrast to the double-difference GF and IF ambiguity resolution, the resolution of the so-called track-to-track GF and IF ambiguities between two tracks of a satellite observed by the same station only requires one receiver and one satellite. Most of the remaining errors like receiver and satellite delays (electronics, cables, etc.) are eliminated, if they are not changing rapidly in time, and the noise level is reduced theoretically by a factor of square root of two compared to double-differences. This paper presents first results concerning track-to-track ambiguity resolution using triple-frequency GF and IF linear combinations based on data from the Multi-GNSS Experiment (MGEX) from April 29 to May 9, 2012 and from December 23 to December 29, 2012. This includes triple-frequency phase and code observations with different combinations of receiver tracking modes. 
The results show that it is possible to resolve the combined track-to-track ambiguities of the best two triple-frequency GF and IF linear combinations for the Galileo frequency triplet E1, E5b and E5a: more than 99.6% of the fractional ambiguities of the best linear combination lie within ± 0.03 cycles, and more than 98.8% of the fractional ambiguities of the second-best linear combination lie within ± 0.2 cycles. The fractional parts of the ambiguities for the GPS frequency triplet L1, L2 and L5 are more disturbed by errors such as the uncalibrated Phase Center Offsets (PCOs) and Phase Center Variations (PCVs), which have not been taken into account. The best two GF and IF linear combinations between tracks are helpful for detecting problems in data and receivers. Furthermore, resolving the track-to-track ambiguities helps to connect the single-receiver ambiguities on the normal-equation level and to improve ambiguity resolution.
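
The defining property of such a combination can be sketched numerically: for carrier phases expressed in metres, a geometry-free and ionosphere-free combination has coefficients that sum to zero and whose 1/f²-weighted sum is also zero. The sketch below solves these two constraints for the Galileo E1/E5b/E5a triplet (public frequency values); the paper's "best two" combinations additionally optimize the noise level, which is omitted here.

```python
import numpy as np

# Galileo carrier frequencies (MHz): E1, E5b, E5a
f = np.array([1575.42, 1207.14, 1176.45])

# Constraints on coefficients c applied to phases in metres:
#   geometry-free:   c1 + c2 + c3 = 0
#   ionosphere-free: c1/f1^2 + c2/f2^2 + c3/f3^2 = 0
A = np.vstack([np.ones(3), 1.0 / f**2])

# The GF+IF combination is the (one-dimensional) null space of A
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]
c = c / np.max(np.abs(c))  # normalize for readability
# c now defines a combination free of geometry and first-order ionosphere
```

With three frequencies the two constraints leave exactly one degree of freedom, which is why a triple-frequency signal is the minimum needed for a purely GF and IF phase combination.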

  1. Impact of ambiguity resolution and application of transformation parameters obtained by regional GNSS network in Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Gandolfi, S.; Poluzzi, L.; Tavasci, L.

    2012-12-01

    Precise Point Positioning (PPP) is one of the possible approaches for GNSS data processing. As is well known, this technique is faster and more flexible than differenced approaches and constitutes a reliable method for the accurate positioning of remote GNSS stations, even in remote areas such as Antarctica. Until a few years ago, one of the major limits of the method was the impossibility of resolving the ambiguities as integers, but nowadays several methods are available to overcome this limitation. The first software package permitting a PPP solution was GIPSY OASIS, developed and maintained by JPL (NASA). JPL also produces orbits and files ready to be used with GIPSY, and recently these products have made it possible to resolve ambiguities, improving the stability of solutions. PPP estimates positions in the reference frame of the orbits (IGS); when coordinates in other reference frames, such as the ITRF, are needed, a transformation must be applied. Among its products JPL offers, for each day, a global 7-parameter transformation that allows the survey to be located in the ITRF reference frame. In some cases it is also possible to set up a customized process and obtain analogous parameters using a local/regional network of reference stations whose coordinates are also available in the desired reference frame. In this work, tests on accuracy have been carried out comparing different PPP solutions obtained using the same software package (GIPSY) but varying the ambiguity resolution and the global or regional transformation parameters. In particular, two test areas have been considered, the first located in Antarctica and the second in Italy. The aim of the work is to evaluate the impact of ambiguity resolution and of the use of local/regional transformation parameters on the final solutions. Tests showed that ambiguity resolution improves the precision, especially in the East component, with a scatter reduction of about 8%. The use of global transformation parameters improves the accuracy by about 59%, 63% and 29% in the three components (N, E, U), while other tests showed that improvements of 67%, 71% and 53% are possible using regional transformation parameters. An example of the impact of global vs. regional transformation parameters on a GPS time series is presented.
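
The 7-parameter transformation mentioned above can be sketched in its small-angle Helmert form: three translations, a scale factor, and three small rotations applied to an Earth-centered coordinate. The parameter values and the coordinate below are invented for illustration (rotation sign conventions vary between services); JPL's daily parameters would be substituted in practice.

```python
import numpy as np

def helmert7(xyz, t, s_ppb, rx, ry, rz):
    """Apply a 7-parameter (Helmert) transformation: translations t (m),
    scale s_ppb (parts per billion), small rotations rx, ry, rz (rad)."""
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])  # small-angle skew-symmetric rotation
    return xyz + t + (s_ppb * 1e-9) * xyz + R @ xyz

# Hypothetical parameters with roughly ITRF-like magnitudes
params = dict(t=np.array([0.01, -0.02, 0.005]),  # metres
              s_ppb=1.2, rx=2e-9, ry=-1e-9, rz=3e-9)
x = np.array([4_027_894.0, 307_045.0, 4_919_498.0])  # an ECEF point (m)
x_itrf = helmert7(x, **params)
# the shift is a few centimetres, typical of frame alignment corrections
```

At these magnitudes the linearized (small-angle) form is accurate to well below a millimetre, which is why geodetic software uses it instead of full rotation matrices.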

  2. To mind the mind: An event-related potential study of word class and semantic ambiguity

    PubMed Central

    Lee, Chia-lin; Federmeier, Kara D.

    2009-01-01

    The goal of this study was to jointly examine the effects of word class, word class ambiguity, and semantic ambiguity on the brain response to words in syntactically specified contexts. Four types of words were used: (1) word class ambiguous words with a high degree of semantic ambiguity (e.g., ‘duck’); (2) word class ambiguous words with little or no semantic ambiguity (e.g., ‘vote’); (3) word class unambiguous nouns (e.g., ‘sofa’); and (4) word class unambiguous verbs (e.g., ‘eat’). These words were embedded in minimal phrases that explicitly specified their word class: “the” for nouns (and ambiguous words used as nouns) and “to” for verbs (and ambiguous words used as verbs). Our results replicate the basic word class effects found in prior work (Federmeier, K.D., Segal, J.B., Lombrozo, T., Kutas, M., 2000. Brain responses to nouns, verbs and class ambiguous words in context. Brain, 123 (12), 2552–2566), including an enhanced N400 (250–450 ms) to nouns compared with verbs and an enhanced frontal positivity (300–700 ms) to unambiguous verbs in relation to unambiguous nouns. A sustained frontal negativity (250–900 ms) that was previously linked to word class ambiguity also appeared in this study but was specific to word class ambiguous items that also had a high level of semantic ambiguity; word class ambiguous items without semantic ambiguity, in contrast, were more positive than class unambiguous words in the early part of this time window (250–500 ms). Thus, this frontal negative effect seems to be driven by the need to resolve the semantic ambiguity that is sometimes associated with different grammatical uses of a word class ambiguous homograph rather than by the class ambiguity per se. PMID:16516169

  3. A causal role for V5/MT neurons coding motion-disparity conjunctions in resolving perceptual ambiguity.

    PubMed

    Krug, Kristine; Cicmil, Nela; Parker, Andrew J; Cumming, Bruce G

    2013-08-05

    Judgments about the perceptual appearance of visual objects require the combination of multiple parameters, like location, direction, color, speed, and depth. Our understanding of perceptual judgments has been greatly informed by studies of ambiguous figures, which take on different appearances depending upon the brain state of the observer. Here we probe the neural mechanisms hypothesized as responsible for judging the apparent direction of rotation of ambiguous structure from motion (SFM) stimuli. Resolving the rotation direction of SFM cylinders requires the conjoint decoding of direction of motion and binocular depth signals [1, 2]. Within cortical visual area V5/MT of two macaque monkeys, we applied electrical stimulation at sites with consistent multiunit tuning to combinations of binocular depth and direction of motion, while the monkey made perceptual decisions about the rotation of SFM stimuli. For both ambiguous and unambiguous SFM figures, rotation judgments shifted as if we had added a specific conjunction of disparity and motion signals to the stimulus elements. This is the first causal demonstration that the activity of neurons in V5/MT contributes directly to the perception of SFM stimuli and by implication to decoding the specific conjunction of disparity and motion, the two different visual cues whose combination drives the perceptual judgment. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Resolving the ambiguity: Making sense of intrinsic disorder when PDB structures disagree.

    PubMed

    DeForte, Shelly; Uversky, Vladimir N

    2016-03-01

    Missing regions in X-ray crystal structures in the Protein Data Bank (PDB) have played a foundational role in the study of intrinsically disordered protein regions (IDPRs), especially in the development of in silico predictors of intrinsic disorder. However, a missing region is only a weak indication of intrinsic disorder, and this uncertainty is compounded by the presence of ambiguous regions, where more than one structure of the same protein sequence "disagrees" in terms of the presence or absence of missing residues. The question is this: are these ambiguous regions intrinsically disordered, or are they the result of static disorder that arises from experimental conditions, ensembles of structures, or domain wobbling? A novel way of looking at ambiguous regions in terms of the pattern between multiple PDB structures has been demonstrated. It was found that the propensity for intrinsic disorder increases as the level of ambiguity decreases. However, it is also shown that ambiguity is more likely to occur as the protein region is placed within different environmental conditions, and even the most ambiguous regions as a set display compositional bias that suggests flexibility. The results suggested that ambiguity is a natural result for many IDPRs crystallized under different conditions and that static disorder and wobbling domains are relatively rare. Instead, it is more likely that ambiguity arises because many of these regions were conditionally or partially disordered. © 2016 The Protein Society.

  5. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    NASA Astrophysics Data System (ADS)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
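
The reconstruction step at the heart of ADM can be sketched in 1-D: the unfiltered field is approximated by the truncated series u* = Σ_{k=0}^{N} (I − G)^k ū, where G is the filter and ū the filtered field. The signal and the periodic box filter below are invented for illustration; in the paper the filtering acts on TFM fields in higher dimensions.

```python
import numpy as np

def box_filter(u, width=5):
    """Periodic box (top-hat) filter, applied via circular convolution."""
    n = len(u)
    k = np.zeros(n)
    k[:width] = 1.0 / width
    k = np.roll(k, -(width // 2))  # center the kernel at index 0
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(k)))

def adm_reconstruct(u_filtered, n_deconv=5, filt=box_filter):
    """Approximate deconvolution: u* = sum_{k=0}^{N} (I - G)^k u_filtered."""
    term = u_filtered.copy()   # k = 0 term
    u_star = term.copy()
    for _ in range(n_deconv):
        term = term - filt(term)  # apply (I - G) once more
        u_star += term
    return u_star

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(4.0 * x)       # "true" unfiltered field
u_bar = box_filter(u)                        # what a coarse grid resolves
u_star = adm_reconstruct(u_bar)
# u_star recovers the attenuated fine-scale content much better than u_bar
```

Once u* is available, the unclosed terms of the filtered equations can be evaluated from it directly, which is exactly how the a priori test in the paper proceeds.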

  6. Resolving phase ambiguities in the calibration of redundant interferometric arrays: implications for array design

    DTIC Science & Technology

    2016-05-18

    course of this paper, we will first identify this ambiguity from a mathematical perspective, relate it to a particular physical structure (i.e. the...invariance to a physical condition on aperture placement is more intuitive when considering the raw phase measurements as opposed to their closures. For...to wrapping of the phase measurements. We have hence arrived at a physical definition of a wrap-invariant pattern. We now apply Algorithm 1 to the

  7. A Semantic Lexicon-Based Approach for Sense Disambiguation and Its WWW Application

    NASA Astrophysics Data System (ADS)

    di Lecce, Vincenzo; Calabrese, Marco; Soldo, Domenico

    This work proposes a basic framework for sense disambiguation through the use of a Semantic Lexicon, a machine-readable dictionary managing both word senses and lexico-semantic relations. More specifically, the polysemous ambiguity characterizing Web documents is discussed. The adopted Semantic Lexicon is WordNet, a lexical knowledge base of English words widely adopted in many research studies on knowledge discovery. The proposed approach extends recent works on knowledge discovery by focusing on the sense disambiguation aspect. By exploiting the structure of the WordNet database, lexico-semantic features are used to resolve the inherent sense ambiguity of written text, with particular reference to HTML resources. The obtained results may be extended to generic hypertextual repositories as well. Experiments show that polysemy reduction can be used to hint at the meaning of specific senses in given contexts.

  8. Patterns of Parental Authority and Adolescent Autonomy

    ERIC Educational Resources Information Center

    Baumrind, Diana

    2005-01-01

    In proposing connections among the paradigms represented by domain theory, parental control theory, and Baumrind's configural approach to parental authority, the worldview of each paradigm must be respected and ambiguities in core concepts must be resolved.

  9. A theoretical study on the bottlenecks of GPS phase ambiguity resolution in a CORS RTK Network

    NASA Astrophysics Data System (ADS)

    Odijk, D.; Teunissen, P.

    2011-01-01

    Crucial to the performance of GPS Network RTK positioning is that a user receives and applies correction information from a CORS Network. These corrections are necessary for the user to account for the atmospheric (ionospheric and tropospheric) delays and possibly orbit errors between his approximate location and the locations of the CORS Network stations. In order to provide the most precise corrections to users, the CORS Network processing should be based on integer resolution of the carrier phase ambiguities between the network's CORS stations. One of the main challenges is to reduce the convergence time, thus being able to quickly resolve the integer carrier phase ambiguities between the network's reference stations. Ideally, the network ambiguity resolution should be conducted within one single observation epoch, thus truly in real time. Unfortunately, single-epoch CORS Network RTK ambiguity resolution is currently not feasible and in the present contribution we study the bottlenecks preventing this. For current dual-frequency GPS the primary cause of these CORS Network integer ambiguity initialization times is the lack of a sufficiently large number of visible satellites. Although an increase in satellite number shortens the ambiguity convergence times, instantaneous CORS Network RTK ambiguity resolution is not feasible even with 14 satellites. It is further shown that increasing the number of stations within the CORS Network itself does not help ambiguity resolution much, since every new station introduces new ambiguities. The problem with CORS Network RTK ambiguity resolution is the presence of the atmospheric (mainly ionospheric) delays themselves and the fact that there are no external corrections that are sufficiently precise. 
We also show that external satellite clock corrections hardly contribute to CORS Network RTK ambiguity resolution, despite their quality, since the network satellite clock parameters and the ambiguities are almost completely uncorrelated. On the positive side, the foreseen modernized GPS will have a very beneficial effect on CORS ambiguity resolution, thanks to an additional frequency with improved code precision.

  10. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined the 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from the Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of salt dome in this region.

  11. MSTAR: an absolute metrology sensor with sub-micron accuracy for space-based applications

    NASA Technical Reports Server (NTRS)

    Peters, Robert D.; Lay, Oliver P.; Dubovitsky, Serge; Burger, Johan P.; Jeganathan, Muthu

    2004-01-01

    The MSTAR sensor is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with subnanometer accuracy.
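
The principle of resolving the integer cycle ambiguity can be sketched in a few lines: a coarse but absolute range measurement pins down the integer number of wavelengths, and the precise fractional interferometric phase supplies the rest. All numbers below are invented; the actual MSTAR scheme derives its coarse scale from modulation sidebands rather than a single coarse ranging step.

```python
import numpy as np

lam = 1.55e-6             # laser wavelength (m), hypothetical
true_range = 0.743210987  # metres, hypothetical target distance

# Fine measurement: fractional interferometric phase (precise but ambiguous)
frac = (true_range / lam) % 1.0

# Coarse measurement: absolute, but only good to a fraction of a cycle
rng = np.random.default_rng(2)
coarse = true_range + 0.3 * lam * (2.0 * rng.random() - 1.0)

# Resolve the integer cycle count, then combine coarse and fine data
N = np.round(coarse / lam - frac)
resolved = (N + frac) * lam
# `resolved` matches true_range to the precision of the fine phase
```

As long as the coarse measurement is accurate to better than half a wavelength, the rounding recovers the correct integer, and the final accuracy is set entirely by the fine phase measurement.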

  12. Resolving a Long-Standing Ambiguity: the Non-Planarity of gauche-1,3-BUTADIENE Revealed by Microwave Spectroscopy

    NASA Astrophysics Data System (ADS)

    Martin-Drumel, Marie-Aline; McCarthy, Michael C.; Patterson, David; Eibenberger, Sandra; Buckingham, Grant; Baraban, Joshua H.; Ellison, Barney; Stanton, John F.

    2016-06-01

    The preferred conformation of cis-1,3-butadiene (CH_2=CH-CH=CH_2) has been of long-standing importance in organic chemistry because of its role in Diels-Alder transition states. The molecule could adopt a planar s-cis conformation, which favors conjugation along the carbon chain, or a non-planar gauche conformation, as a result of steric interactions between the terminal H atoms. To resolve this ambiguity, we have now measured the pure rotational spectrum of this isomer in the microwave region, unambiguously establishing a significant inertial defect, and therefore a gauche conformation. Experimental measurements of gauche-1,3-butadiene and several of its isotopologues using cavity Fourier-transform microwave (FTMW) spectroscopy in a supersonic expansion and chirped-pulse FTMW spectroscopy in a 4 K buffer gas cell will be summarized, as will new quantum chemical calculations.

  13. Deconvolution enhanced direction of arrival estimation using one- and three-component seismic arrays applied to ocean induced microseisms

    NASA Astrophysics Data System (ADS)

    Gal, M.; Reading, A. M.; Ellingsen, S. P.; Koper, K. D.; Burlacu, R.; Gibbons, S. J.

    2016-07-01

    Microseisms in the period range of 2-10 s are generated in deep oceans and near coastal regions. It is common for microseisms from multiple sources to arrive at the same time at a given seismometer. It is therefore desirable to be able to measure multiple slowness vectors accurately. Popular ways to estimate the direction of arrival of ocean induced microseisms are the conventional (fk) or adaptive (Capon) beamformer. These techniques give robust estimates, but are limited in their resolution capabilities and hence do not always detect all arrivals. One of the limiting factors in determining direction of arrival with seismic arrays is the array response, which can strongly influence the estimation of weaker sources. In this work, we aim to improve the resolution for weaker sources and evaluate the performance of two deconvolution algorithms, Richardson-Lucy deconvolution and a new implementation of CLEAN-PSF. The algorithms are tested with three arrays of different aperture (ASAR, WRA and NORSAR) using 1 month of real data each and compared with the conventional approaches. We find that both algorithms improve on the conventional methods, with CLEAN-PSF performing best. We then extend the CLEAN-PSF framework to three components (3C) and evaluate 1 yr of data from the Pilbara Seismic Array in northwest Australia. The 3C CLEAN-PSF analysis is capable of resolving a previously undetected Sn phase.

  14. Phonological ambiguity modulates resolution of semantic ambiguity during reading: An fMRI study of Hebrew.

    PubMed

    Bitan, Tali; Kaftory, Asaf; Meiri-Leib, Adi; Eviatar, Zohar; Peleg, Orna

    2017-10-01

    The current fMRI study examined the role of phonology in the extraction of meaning from print in each hemisphere by comparing homophonic and heterophonic homographs (ambiguous words in which both meanings have the same or different sounds respectively, e.g., bank or tear). The analysis distinguished between the first phase, in which participants read ambiguous words without context, and the second phase in which the context resolves the ambiguity. Native Hebrew readers were scanned during semantic relatedness judgments on pairs of words in which the first word was either a homophone or a heterophone and the second word was related to its dominant or subordinate meaning. In Phase 1 there was greater activation for heterophones in left inferior frontal gyrus (IFG), pars opercularis, and more activation for homophones in bilateral IFG pars orbitalis, suggesting that resolution of the conflict at the phonological level has abolished the semantic ambiguity for heterophones. Reduced activation for all ambiguous words in temporo-parietal regions suggests that although ambiguity enhances controlled lexical selection processes in frontal regions it reduces reliance on bottom-up mapping processes. After presentation of the context, a larger difference between the dominant and subordinate meaning was found for heterophones in all reading-related regions, suggesting a greater engagement for heterophones with the dominant meaning. Altogether these results are consistent with the prominent role of phonological processing in visual word recognition. Finally, despite differences in hemispheric asymmetry between homophones and heterophones, ambiguity resolution, even toward the subordinate meaning, is largely left lateralized. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Who is respectful? Effects of social context and individual empathic ability on ambiguity resolution during utterance comprehension

    PubMed Central

    Jiang, Xiaoming; Zhou, Xiaolin

    2015-01-01

    Verbal communication is often ambiguous. By employing the event-related potential (ERP) technique, this study investigated how a comprehender resolves referential ambiguity by using information concerning the social status of communicators. Participants read a conversational scenario which included a minimal conversational context describing a speaker and two other persons of the same or different social status and a directly quoted utterance. A singular, second-person pronoun in the respectful form (nin/nin-de in Chinese) in the utterance could be ambiguous with respect to which of the two persons was the addressee (the “Ambiguous condition”). Alternatively, the pronoun was not ambiguous either because one of the two persons was of higher social status and hence should be the addressee according to social convention (the “Status condition”) or because a word referring to the status of a person was additionally inserted before the pronoun to help indicate the referent of the pronoun (the “Referent condition”). Results showed that the perceived ambiguity decreased over the Ambiguous, Status, and Referent conditions. Electrophysiologically, the pronoun elicited a larger N400 in the Referent condition than in the Status and Ambiguous conditions, reflecting an increased integration demand due to the necessity of linking the pronoun to both its antecedent and the status word. Relative to the Referent condition, a late, sustained positivity was elicited for the Status condition starting from 600 ms, while a more delayed, anterior negativity was elicited for the Ambiguous condition. Moreover, the N400 effect was modulated by individuals' sensitivity to the social status information, while the late positivity effect was modulated by individuals' empathic ability. These findings highlight the neurocognitive flexibility of contextual bias in referential processing during utterance comprehension. PMID:26557102

  16. On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2018-06-01

    Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
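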
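
    The precision gain from ambiguity fixing can be illustrated with a toy conditional-covariance computation. The sketch below uses hypothetical numbers and simple integer rounding, not the paper's canonical decomposition or a full LAMBDA search: float carrier-phase ambiguities are fixed to integers, and conditioning the geometry (baseline) solution on the fixed integers shrinks its covariance.

```python
import numpy as np

# Toy illustration (hypothetical numbers; simple rounding instead of the
# paper's canonical decomposition or a full LAMBDA search) of how fixing
# carrier-phase ambiguities to integers sharpens the geometry solution.
a_true = np.array([3.0, -7.0, 12.0])       # true integer ambiguities (cycles)
rng = np.random.default_rng(0)
a_hat = a_true + rng.normal(0.0, 0.05, 3)  # float ambiguity solution

# Integer fixing by rounding (LAMBDA would decorrelate first).
a_fix = np.round(a_hat)

# Conditioning the baseline estimate on the fixed integers shrinks its
# covariance: Q_b|a = Q_b - Q_ba Q_a^{-1} Q_ab  (covariances invented).
Q_b = 1e-2 * np.eye(3)                     # float baseline covariance
Q_a = 1e-2 * np.eye(3)                     # float ambiguity covariance
Q_ba = 9e-3 * np.eye(3)                    # cross covariance (0.9 correlation)
Q_b_fixed = Q_b - Q_ba @ np.linalg.inv(Q_a) @ Q_ba.T

gain = np.sqrt(np.trace(Q_b) / np.trace(Q_b_fixed))
print(a_fix, gain)
```

    The gain here is just the square root of a trace ratio; which parameter groups enjoy such a gain, and by how much, is exactly what the paper's closed-form expressions quantify.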

  18. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  19. Empirical Green's function analysis: Taking the next step

    USGS Publications Warehouse

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ, if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
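
    The kind of joint source-attenuation fit described above can be sketched with a Brune omega-square source spectrum attenuated by an exponential path/site operator. All parameter values below are invented for illustration; a multiple-EGF inversion would additionally share the attenuation parameter across colocated events.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch of the spectral model behind EGF-style fitting: a
# Brune omega-square source spectrum with corner frequency fc, attenuated
# by exp(-pi * kappa * f). All parameter values are invented.
def spectrum(f, omega0, fc, kappa):
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

f = np.linspace(0.5, 30.0, 200)               # frequency, Hz
true_params = (100.0, 4.0, 0.04)              # Omega0, fc (Hz), kappa (s)
rng = np.random.default_rng(1)
obs = spectrum(f, *true_params) * (1.0 + 0.02 * rng.standard_normal(f.size))

# Joint least-squares fit for source (Omega0, fc) and attenuation (kappa);
# in the multiple-EGF scheme kappa would be common to colocated events.
popt, _ = curve_fit(spectrum, f, obs, p0=(50.0, 2.0, 0.02))
print(np.round(popt, 3))
```

    The trade-off the abstract warns about is visible in this parameterization: an unmodeled site resonance multiplying the spectrum can be partially absorbed by biased fc and kappa estimates.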

  20. The nature of multiple solutions for surface wind speed over the oceans from scatterometer measurements

    NASA Technical Reports Server (NTRS)

    Price, J. C.

    1975-01-01

    The satellite SEASAT-A will carry a radar scatterometer in order to measure microwave backscatter from the sea surface. From pairs of radar measurements at angles separated by 90 deg in azimuth the surface wind speed and direction may be inferred, though not uniquely. The character of the solutions for wind speed and direction is displayed, as well as the nature of the ambiguities of these solutions. An economical procedure for handling such data is described, plus a criterion for the need for conventional (surface) data in order to resolve the ambiguities of solutions.
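
    The multiple-solution structure can be reproduced with a toy model function. The sketch below uses an invented crosswind modulation, not the actual SEASAT-A geophysical model function: a brute-force search over candidate winds shows that two looks separated by 90° in azimuth admit four direction solutions at the same speed.

```python
import numpy as np

# Toy illustration of scatterometer wind ambiguity. The model function
# (coefficients invented) relates backscatter to wind speed v and the
# angle chi between the wind direction and the radar look azimuth.
def sigma0(v, chi_deg):
    chi = np.radians(chi_deg)
    return 1e-3 * v**1.5 * (1.0 + 0.4 * np.cos(2 * chi))

v_true, dir_true = 10.0, 40.0            # m/s, deg
looks = np.array([0.0, 90.0])            # two radar look azimuths, deg
meas = sigma0(v_true, dir_true - looks)

# Grid search: every (speed, direction) pair reproducing both measurements.
solutions = []
for v in np.arange(5.0, 15.01, 0.1):
    for d in np.arange(0.0, 360.0, 1.0):
        model = sigma0(v, d - looks)
        if np.all(np.abs(model - meas) / meas < 1e-3):
            solutions.append((round(v, 2), d))
print(solutions)
```

    The fourfold directional ambiguity at a single well-determined speed is the pattern that conventional surface data would be called in to break.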

  1. Sensorimotor Adaptation Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clement, G. R.; Rupert, A. H.; Reschke, M. F.; Harm, D. L.; Guedry, F. E.

    2007-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. Adaptive changes in how inertial cues from the otolith system are integrated with other sensory information lead to perceptual and postural disturbances upon return to Earth's gravity. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of tilt-translation disturbances during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation during tilt and translation motion.

  2. ORMAN: optimal resolution of ambiguous RNA-Seq multimappings in the presence of novel isoforms.

    PubMed

    Dao, Phuong; Numanagić, Ibrahim; Lin, Yen-Yi; Hach, Faraz; Karakoc, Emre; Donmez, Nilgun; Collins, Colin; Eichler, Evan E; Sahinalp, S Cenk

    2014-03-01

    RNA-Seq technology is promising to uncover many novel alternative splicing events, gene fusions and other variations in RNA transcripts. For an accurate detection and quantification of transcripts, it is important to resolve the mapping ambiguity for those RNA-Seq reads that can be mapped to multiple loci: >17% of the reads from mouse RNA-Seq data and 50% of the reads from some plant RNA-Seq data have multiple mapping loci. In this study, we show how to resolve the mapping ambiguity in the presence of novel transcriptomic events such as exon skipping and novel indels towards accurate downstream analysis. We introduce ORMAN (Optimal Resolution of Multimapping Ambiguity of RNA-Seq Reads), which aims to compute the minimum number of potential transcript products for each gene and to assign each multimapping read to one of these transcripts based on the estimated distribution of the region covering the read. ORMAN achieves this objective through a combinatorial optimization formulation, which is solved through well-known approximation algorithms, integer linear programs and heuristics. On a simulated RNA-Seq dataset including a random subset of transcripts from the UCSC database, the performance of several state-of-the-art methods for identifying and quantifying novel transcripts, such as Cufflinks, IsoLasso and CLIIQ, is significantly improved through the use of ORMAN. Furthermore, in an experiment using real RNA-Seq reads, we show that ORMAN is able to resolve multimapping to produce coverage values that are similar to the original distribution, even in genes with highly non-uniform coverage. ORMAN is available at http://orman.sf.net
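
    The minimum-transcript objective has the flavor of set cover, which is NP-hard and is why ORMAN resorts to approximation algorithms and integer linear programs. The following is a minimal greedy sketch over a toy read-to-transcript table, not ORMAN's actual formulation (which also weighs estimated coverage distributions when assigning reads).

```python
from collections import defaultdict

# Hypothetical greedy set-cover sketch: choose a small set of candidate
# transcripts explaining all reads, then assign each multimapped read to
# one chosen transcript. Read/transcript names are invented.
reads = {                       # read -> transcripts it maps to
    "r1": {"T1", "T2"},
    "r2": {"T1"},
    "r3": {"T2", "T3"},
    "r4": {"T1", "T3"},
}

uncovered = set(reads)
chosen = []
while uncovered:
    # Pick the transcript covering the most still-unexplained reads
    # (ties broken alphabetically for determinism).
    counts = defaultdict(int)
    for r in uncovered:
        for t in reads[r]:
            counts[t] += 1
    best = max(sorted(counts), key=lambda t: counts[t])
    chosen.append(best)
    uncovered -= {r for r in uncovered if best in reads[r]}

assignment = {r: next(t for t in chosen if t in ts) for r, ts in reads.items()}
print(chosen, assignment)
```

    Two transcripts suffice here even though four candidates are touched by the reads; minimizing that count is what suppresses spurious isoform calls downstream.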

  3. SU-E-T-299: Small Fields Profiles Correction Through Detectors Spatial Response Functions and Field Size Dependence Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filipuzzi, M; Garrigo, E; Venencia, C

    2014-06-01

    Purpose: To calculate the spatial response function of various radiation detectors, to evaluate the dependence on the field size and to analyze the small fields profiles corrections by deconvolution techniques. Methods: Crossline profiles were measured on a Novalis Tx 6MV beam with a HDMLC. The configuration setup was SSD=100cm and depth=5cm. Five fields were studied (200×200 mm², 100×100 mm², 20×20 mm², 10×10 mm² and 5×5 mm²) and measurements were made with passive detectors (EBT3 radiochromic films and TLD700 thermoluminescent detectors), ionization chambers (PTW30013, PTW31003, CC04 and PTW31016) and diodes (PTW60012 and IBA SFD). The results of passive detectors were adopted as the actual beam profile. To calculate the detector kernels, modeled by Gaussian functions, an iterative process based on a least squares criterion was used. The deconvolutions of the measured profiles were calculated with the Richardson-Lucy method. Results: The profiles of the passive detectors corresponded with a difference in the penumbra of less than 0.1mm. Both diodes resolve the profiles with an overestimation of the penumbra smaller than 0.2mm. For the other detectors, response functions were calculated and resulted in Gaussian functions with a standard deviation approximately equal to the radius of the detector under study (with a variation less than 3%). The corrected profiles resolve the penumbra with less than 1% error. Major discrepancies were observed for cases in extreme conditions (PTW31003 and 5×5 mm² field size). Conclusion: This work concludes that the response function of a radiation detector is independent of the field size, even for small radiation beams. The profiles correction, using deconvolution techniques and response functions of standard deviation equal to the radius of the detector, gives penumbra values with less than 1% difference to the real profile. The implementation of this technique allows estimating the real profile, freeing it from the effects of the detector used for the acquisition.
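
    The correction pipeline above (Gaussian kernel with standard deviation near the detector radius, Richardson-Lucy deconvolution) can be sketched in one dimension. The field size, kernel width and grid below are invented for illustration.

```python
import numpy as np

# Minimal 1-D Richardson-Lucy sketch: a step-like "true" profile blurred
# by a Gaussian detector kernel is recovered iteratively. Profile and
# kernel width are hypothetical.
def richardson_lucy(measured, kernel, iterations=50):
    estimate = np.full_like(measured, measured.mean())
    kernel_flip = kernel[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, kernel, mode="same")
        ratio = measured / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, kernel_flip, mode="same")
    return estimate

x = np.linspace(-20.0, 20.0, 401)                  # position, mm
true_profile = np.where(np.abs(x) < 5.0, 1.0, 0.0) # 10 mm field
sigma = 1.0                                        # ~detector radius, mm
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

measured = np.convolve(true_profile, kernel, mode="same")
restored = richardson_lucy(measured, kernel)
```

    For noise-free data the iteration sharpens the blurred penumbra back toward the true step edges; with real measurements the iteration count must be limited before noise amplification sets in.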

  4. The Levels of "Rappaccini's Daughter."

    ERIC Educational Resources Information Center

    Hands, Charles B.

    1970-01-01

    Nathaniel Hawthorne's short story "Rappaccini's Daughter" reflects the author's view that inherent in the human dilemma are ambiguous ironies which cannot be resolved. Although Hawthorne (unlike Ralph Waldo Emerson) perceives evil as an extraordinarily potent force, he offers no clear moral solutions in this story, but examines various…

  5. Triaxial ellipsoid dimensions and rotational poles of seven asteroids from Lick Observatory adaptive optics images, and of Ceres

    NASA Astrophysics Data System (ADS)

    Drummond, Jack; Christou, Julian

    2008-10-01

    Seven main belt asteroids, 2 Pallas, 3 Juno, 4 Vesta, 16 Psyche, 87 Sylvia, 324 Bamberga, and 707 Interamnia, were imaged with the adaptive optics system on the 3 m Shane telescope at Lick Observatory in the near infrared, and their triaxial ellipsoid dimensions and rotational poles have been determined with parametric blind deconvolution. In addition, the dimensions and pole for 1 Ceres are derived from resolved images at multiple epochs, even though it is an oblate spheroid.

  6. Shear Recovery Accuracy in Weak-Lensing Analysis with the Elliptical Gauss-Laguerre Method

    NASA Astrophysics Data System (ADS)

    Nakajima, Reiko; Bernstein, Gary

    2007-04-01

    We implement the elliptical Gauss-Laguerre (EGL) galaxy-shape measurement method proposed by Bernstein & Jarvis and quantify the shear recovery accuracy in weak-lensing analysis. This method uses a deconvolution fitting scheme to remove the effects of the point-spread function (PSF). The test simulates >10^7 noisy galaxy images convolved with anisotropic PSFs and attempts to recover an input shear. The tests are designed to be immune to statistical (random) distributions of shapes, selection biases, and crowding, in order to test more rigorously the effects of detection significance (signal-to-noise ratio [S/N]), PSF, and galaxy resolution. The systematic error in shear recovery is divided into two classes, calibration (multiplicative) and additive, with the latter arising from PSF anisotropy. At S/N > 50, the deconvolution method measures the galaxy shape and input shear to ~1% multiplicative accuracy and suppresses >99% of the PSF anisotropy. These systematic errors increase to ~4% for the worst conditions, with poorly resolved galaxies at S/N ≈ 20. The EGL weak-lensing analysis has the best demonstrated accuracy to date, sufficient for the next generation of weak-lensing surveys.

  7. Pourquoi le francais et quel francais au Maroc? (Why French and Which French in Morocco?)

    ERIC Educational Resources Information Center

    Akouaou, Ahmed

    1984-01-01

    The status of French in Morocco is ambiguous: it is neither an official language nor a foreign language, and it would benefit greatly from an official definition that would allow a variety of language conflicts to be resolved. (MSE)

  8. Ambiguity's aftermath: how age differences in resolving lexical ambiguity affect subsequent comprehension.

    PubMed

    Lee, Chia-lin; Federmeier, Kara D

    2012-04-01

    When ambiguity resolution is difficult, younger adults recruit selection-related neural resources that older adults do not. To elucidate the nature of those resources and the consequences of their recruitment for subsequent comprehension, we embedded noun/verb homographs and matched unambiguous words in syntactically well-specified but semantically neutral sentences. Target words were followed by a prepositional phrase whose head noun was plausible for only one meaning of the homograph. Replicating past findings, younger but not older adults elicited sustained frontal negativity to homographs compared to unambiguous words. On the subsequent head nouns, younger adults showed plausibility effects in all conditions, attesting to successful meaning selection through suppression. In contrast, older adults showed smaller plausibility effects following ambiguous words and failed to show plausibility effects when the context picked out the homograph's non-dominant meaning (i.e., they did not suppress the contextually-irrelevant dominant meaning). Meaning suppression processes, reflected in the frontal negativity, thus become less available with age, with consequences for subsequent comprehension.

  9. Analytical Review of Contemporary Fatwas in Resolving Biomedical Issues Over Gender Ambiguity.

    PubMed

    Zabidi, Taqwa

    2018-04-21

    Issues of gender ambiguity have been discussed over time from both Islamic and medical perspectives. In Islam, these issues are typically considered in the context of khunūthah (literally translated as hermaphroditism), while biomedical studies have provided a large amount of information on abnormal human biological development, i.e. Disorders of Sex Development (DSDs). However, the connection between these two fields has been given little attention. This research aims to determine the Islamic underpinnings through fatwas issued around the globe. Thus, institutional fatwa organisations among Sunni schools of thought at the international, regional and national levels are observed. The fatwas regarding the management of individuals with gender ambiguity, not specifically on DSDs, are chosen and presented accordingly. Based on the findings, the sporadic fatwas from different parts of the world delineate the issue of sex ambiguity and seem to be able to provide general guidelines for the management of Muslim patients with DSDs. Three common aspects have been discussed, including the methodology of gender assignment, the decision-making process and the surgical and hormonal treatments.

  10. A Nonlinear Interactions Approximation Model for Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Haliloglu, Mehmet U.; Akhavan, Rayhaneh

    2003-11-01

    A new approach to LES modelling is proposed based on direct approximation of the nonlinear terms \overline{u_i u_j} in the filtered Navier-Stokes equations, instead of the subgrid-scale stress, τ_ij. The proposed model, which we call the Nonlinear Interactions Approximation (NIA) model, uses graded filters and deconvolution to parameterize the local interactions across the LES cutoff, and a Smagorinsky eddy viscosity term to parameterize the distant interactions. A dynamic procedure is used to determine the unknown eddy viscosity coefficient, rendering the model free of adjustable parameters. The proposed NIA model has been applied to LES of turbulent channel flows at Re_τ ≈ 210 and Re_τ ≈ 570. The results show good agreement with DNS not only for the mean and resolved second-order turbulence statistics but also for the full (resolved plus subgrid) Reynolds stress and turbulence intensities.

  11. Tracking sperm whales with a towed acoustic vector sensor.

    PubMed

    Thode, Aaron; Skinner, Jeff; Scott, Pam; Roswell, Jeremy; Straley, Janice; Folkert, Kendall

    2010-11-01

    Passive acoustic towed linear arrays are increasingly used to detect marine mammal sounds during mobile anthropogenic activities. However, these arrays cannot resolve between signals arriving from the port or starboard without vessel course changes or multiple cable deployments, and their performance is degraded by vessel self-noise and non-acoustic mechanical vibration. In principle acoustic vector sensors can resolve these directional ambiguities, as well as flag the presence of non-acoustic contamination, provided that the vibration-sensitive sensors can be successfully integrated into compact tow modules. Here a vector sensor module attached to the end of an 800 m towed array is used to detect and localize 1813 sperm whale "clicks" off the coast of Sitka, AK. Three methods were used to identify frequency regimes relatively free of non-acoustic noise contamination, and then the active intensity (propagating energy) of the signal was computed between 4-10 kHz along three orthogonal directions, providing unambiguous bearing estimates of two sperm whales over time. These bearing estimates are consistent with those obtained via conventional methods, but the standard deviations of the vector sensor bearing estimates are twice those of the conventionally-derived bearings. The resolved ambiguities of the bearings deduced from vessel course changes match the vector sensor predictions.

  12. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

    The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However two ambiguity parameters associated with infrared (IR) divergencies of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both these ambiguities from first principles, by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, for treating at once the IR and ultra-violet (UV) divergencies. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  13. Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.

    PubMed

    Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan

    2016-12-01

    The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
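
    Pooling under a global parametric model can be sketched directly: each aperture contributes one linear constraint (its measured normal component), and translation plus rotation rate are solved jointly by least squares, with no intermediate local motion vectors. Positions, normals and motion parameters below are invented for illustration.

```python
import numpy as np

# Sketch of pooling locally ambiguous aperture measurements. Each
# aperture i at position p_i with unit normal n_i observes only the
# normal component c_i = n_i . v(p_i). Fitting a global parametric flow
# v(p) = t + w * R90 @ p (translation t, rotation rate w) recovers the
# rotation without ever computing per-aperture motion vectors.
rng = np.random.default_rng(3)
R90 = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation matrix

w_true, t_true = 0.5, np.array([1.0, -0.3])
P = rng.uniform(-1.0, 1.0, (50, 2))                  # aperture positions
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
N = np.column_stack([np.cos(theta), np.sin(theta)])  # aperture normals

V = t_true + w_true * P @ R90.T              # true local velocities
c = np.sum(N * V, axis=1)                    # observed normal components

# Linear system: c_i = n_i . t + w * n_i . (R90 p_i)  ->  A x = c
A = np.column_stack([N, np.sum(N * (P @ R90.T), axis=1)])
tx, ty, w = np.linalg.lstsq(A, c, rcond=None)[0]
print(np.round([tx, ty, w], 3))
```

    With noise-free constraints the system is overdetermined but exactly consistent, so both the translation and the rotation rate are recovered; a translation-only model (dropping the third column of A) could not fit the same data.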

  14. Sensorimotor Adaptations Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Harm, D. L.; Reschke, M. F.; Rupert, A. H.; Clement, G. R.

    2009-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. We hypothesize that multi-sensory integration will be adaptively optimized in altered gravity environments based on the dynamics of other sensory information available, with greater changes in otolith-mediated responses in the mid-frequency range where there is a crossover of tilt and translation responses. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of tilt-translation disturbances during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation.

  15. Unbiased in-depth characterization of CEX fractions from a stressed monoclonal antibody by mass spectrometry.

    PubMed

    Griaud, François; Denefeld, Blandine; Lang, Manuel; Hensinger, Héloïse; Haberl, Peter; Berg, Matthias

    2017-07-01

    Characterization of charge-based variants by mass spectrometry (MS) is required for the analytical development of a new biologic entity and its marketing approval by health authorities. However, standard peak-based data analysis approaches are time-consuming and biased toward the detection, identification, and quantification of main variants only. The aim of this study was to characterize in-depth acidic and basic species of a stressed IgG1 monoclonal antibody using comprehensive and unbiased MS data evaluation tools. Fractions collected from cation ion exchange (CEX) chromatography were analyzed as intact, after reduction of disulfide bridges, and after proteolytic cleavage using Lys-C. Data of both intact and reduced samples were evaluated consistently using a time-resolved deconvolution algorithm. Peptide mapping data were processed simultaneously, quantified and compared in a systematic manner for all MS signals and fractions. Differences observed between the fractions were then further characterized and assigned. Time-resolved deconvolution enhanced pattern visualization and data interpretation of main and minor modifications in 3-dimensional maps across CEX fractions. Relative quantification of all MS signals across CEX fractions before peptide assignment enabled the detection of fraction-specific chemical modifications at abundances below 1%. Acidic fractions were shown to be heterogeneous, containing antibody fragments, glycated as well as deamidated forms of the heavy and light chains. In contrast, the basic fractions contained mainly modifications of the C-terminus and pyroglutamate formation at the N-terminus of the heavy chain. Systematic data evaluation was performed to investigate multiple data sets and comprehensively extract main and minor differences between each CEX fraction in an unbiased manner.

  16. Accurate Drift Time Determination by Traveling Wave Ion Mobility Spectrometry: The Concept of the Diffusion Calibration.

    PubMed

    Kune, Christopher; Far, Johann; De Pauw, Edwin

    2016-12-06

    Ion mobility spectrometry (IMS) is a gas phase separation technique, which relies on differences in collision cross section (CCS) of ions. Ionic clouds of unresolved conformers overlap if the CCS difference is below the instrumental resolution expressed as CCS/ΔCCS. The experimental arrival time distribution (ATD) peak is then a superimposition of the various contributions weighted by their relative intensities. This paper introduces a strategy for accurate drift time determination using traveling wave ion mobility spectrometry (TWIMS) of poorly resolved or unresolved conformers. This method implements through a calibration procedure the link between the peak full width at half-maximum (fwhm) and the drift time of model compounds for wide range of settings for wave heights and velocities. We modified a Gaussian equation, which achieves the deconvolution of ATD peaks where the fwhm is fixed according to our calibration procedure. The new fitting Gaussian equation only depends on two parameters: The apex of the peak (A) and the mean drift time value (μ). The standard deviation parameter (correlated to fwhm) becomes a function of the drift time. This correlation function between μ and fwhm is obtained using the TWIMS calibration procedure which determines the maximum instrumental ion beam diffusion under limited and controlled space charge effect using ionic compounds which are detected as single conformers in the gas phase. This deconvolution process has been used to highlight the presence of poorly resolved conformers of crown ether complexes and peptides leading to more accurate CCS determinations in better agreement with quantum chemistry predictions.
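
    The constrained fit can be sketched as follows: the width of each Gaussian is tied to its drift time through a hypothetical linear fwhm calibration, leaving only an amplitude and a mean free per conformer. All constants below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the constrained deconvolution described above (calibration
# constants invented): peak width is not a free parameter but a function
# of drift time, fwhm(t) = a + b*t, so each Gaussian contributes only an
# amplitude A and a mean drift time mu.
A_CAL, B_CAL = 0.10, 0.05                  # hypothetical calibration

def fwhm(t):
    return A_CAL + B_CAL * t

def two_peaks(t, a1, mu1, a2, mu2):
    out = 0.0
    for a, mu in ((a1, mu1), (a2, mu2)):
        sigma = fwhm(mu) / 2.3548          # fwhm = 2*sqrt(2 ln 2) * sigma
        out = out + a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return out

t = np.linspace(4.0, 8.0, 400)             # drift time, ms
truth = (1.0, 5.6, 0.6, 6.0)               # two poorly resolved conformers
rng = np.random.default_rng(2)
obs = two_peaks(t, *truth) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(two_peaks, t, obs, p0=(0.8, 5.4, 0.8, 6.2))
print(np.round(popt, 2))
```

    Because the widths are pinned by the calibration, the two overlapping means remain identifiable even though the summed ATD shows only a single asymmetric peak.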

  17. Demystifying the Cost Estimation Process

    ERIC Educational Resources Information Center

    Obi, Samuel C.

    2010-01-01

    In manufacturing today, nothing is more important than giving a customer a clear and straight-forward accounting of what their money has purchased. Many potentially promising return business orders are lost because of unclear, ambiguous, or improper billing. One of the best ways of resolving cost bargaining conflicts is by providing a…

  18. Children with SLI Exhibit Delays Resolving Ambiguous Reference

    ERIC Educational Resources Information Center

    Estis, Julie M.; Beverly, Brenda L.

    2015-01-01

    Fast mapping weaknesses in children with specific language impairment (SLI) may be explained by differences in disambiguation, mapping an unknown word to an unnamed object. The impact of language ability and linguistic stimulus on disambiguation was investigated. Sixteen children with SLI (8 preschool, 8 school-age) and sixteen typically…

  19. A Survey of the High Order Multiplicity of Nearby Solar-Type Binary Stars with Robo-AO

    DTIC Science & Technology

    2015-01-20

    auxiliary images are not used for astrometry or photometry , but are helpful for verifying compan- ion detection and for resolving the 180◦ ambiguity of...pair Ba,Bb was resolved by Robo-AO three times at 0.′′16 with Δi = 0.87m, Δr = 0.97m, and Δz = 0.52m. This corresponds to a mass for Bb of ∼0.6M. We...known quintuple system. The component E (STF 2032AE, E=HIP 79551=GJ 615.2C) is resolved here at 0.′′4 (but not for the first time : Ea,Eb=YSC 152

  20. Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.

    PubMed

    Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping

    2013-06-21

    In the field of Global Navigation Satellite System (GNSS) attitude determination, constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints, such as the baseline length, the geometry of multiple baselines and the horizontal attitude angles, have been used extensively to improve the performance of ambiguity resolution. In GNSS/Inertial Navigation System (INS) integrated attitude determination systems using a low-grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem rather than by the IMU sensors independently. However, when a rotation occurs, the angle through which the vehicle has turned within a short time span can be measured accurately by the IMU. This measurement is treated as a constraint, namely the rate-gyro-integral constraint, which can aid GNSS ambiguity resolution. We use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly with this constraint imposed during the rotation, thus helping to speed up the initialization of attitude parameters under dynamic circumstances. This paper studies only the application of this new constraint to land vehicles. The impacts of measurement errors on the effect of this new constraint are assessed for different grades of IMU and the current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determination.
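
    The candidate-filtering idea — discard any ambiguity candidate whose implied heading change over the rotation disagrees with the integrated gyro rate — can be sketched as below. The function names and the toy candidate table are hypothetical; in a real system each candidate's headings would come from the GNSS baseline solution under that integer hypothesis.

```python
def wrap180(angle_deg):
    # Map an angular difference into (-180, 180] degrees.
    a = angle_deg % 360.0
    return a - 360.0 if a > 180.0 else a

def filter_by_gyro_integral(candidates, gyro_delta_deg, tol_deg=1.0):
    # Keep only candidates whose GNSS-implied heading change over the
    # rotation matches the rate-gyro integral within the tolerance.
    return [c for c in candidates
            if abs(wrap180((c["heading_t1"] - c["heading_t0"])
                           - gyro_delta_deg)) <= tol_deg]

# Toy candidate set: each integer-ambiguity hypothesis implies a heading
# before and after a turn that the IMU measured as +30 degrees.
cands = [
    {"N": (1, -2), "heading_t0": 10.0, "heading_t1": 40.2},   # consistent
    {"N": (2, -2), "heading_t0": 10.0, "heading_t1": 97.5},   # inconsistent
    {"N": (1, -3), "heading_t0": 10.0, "heading_t1": 39.6},   # consistent
]
kept = filter_by_gyro_integral(cands, gyro_delta_deg=30.0)
```

The tolerance would in practice be set from the gyro bias and scale-factor error accumulated over the rotation interval.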

  1. Spectroscopy of disordered low-field sites in Cr3+: Mullite glass ceramic

    NASA Astrophysics Data System (ADS)

    Knutson, Robert; Liu, Huimin; Yen, W. M.; Morgan, T. V.

    1989-09-01

    In this article we present results of optical and ESR studies that have allowed us to study the behavior of Cr3+ at disordered low-field sites within a mullite ceramic host. The results indicate that the existence of these low-field ions, which are likely at sites in regions of disorder, accounts for most of the spectroscopic anomalies previously noted in these materials. Furthermore, energy transfer from ordered high-field to disordered low-field ions is observed. The resulting complex spectra are deconvoluted by means of the recently developed technique of saturation-resolved fluorescence spectroscopy.

  2. Signal processing for ion mobility spectrometers

    NASA Technical Reports Server (NTRS)

    Taylor, S.; Hinton, M.; Turner, R.

    1995-01-01

    Signal processing techniques for systems based upon Ion Mobility Spectrometry will be discussed in the light of 10 years of experience in the design of real-time IMS. Among the topics to be covered are compensation techniques for variations in the number density of the gas - the use of an internal standard (a reference peak) or pressure and temperature sensors. Sources of noise and methods for noise reduction will be discussed together with resolution limitations and the ability of deconvolution techniques to improve resolving power. The use of neural networks (either by themselves or as a component part of a processing system) will be reviewed.
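
    The number-density compensation mentioned above can be done either with pressure and temperature sensors (normalizing mobility to a reduced mobility K0) or with an internal reference peak. A minimal sketch of both, with illustrative numbers:

```python
def reduced_mobility(k_measured, pressure_torr, temp_kelvin):
    # Normalize a measured mobility to standard conditions (K0),
    # compensating for gas number density via pressure and temperature.
    return k_measured * (pressure_torr / 760.0) * (273.15 / temp_kelvin)

def compensate_drift_time(t_measured, t_ref_measured, t_ref_nominal):
    # Internal-standard compensation: rescale all drift times so that
    # the reference peak lands at its nominal position.
    return t_measured * (t_ref_nominal / t_ref_measured)
```

At standard conditions the correction is the identity; a reference peak arriving 5% late rescales every other drift time by the same factor.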

  3. Prosodic Disambiguation of Syntactic Structure: For the Speaker or for the Addressee?

    ERIC Educational Resources Information Center

    Kraljic, Tanya; Brennan, Susan E.

    2005-01-01

    Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so ''for'' their addressees (the "audience design" hypothesis) or ''for'' themselves, as a by-product of planning and articulating utterances. Three…

  4. Measuring the complex permittivity of thin grain samples by the free-space transmission technique

    USDA-ARS?s Scientific Manuscript database

    In this paper, a numerical method for solving a higher-order model that relates the measured transmission coefficient to the permittivity of a material is used to determine the permittivity of thin grain samples. A method for resolving the phase ambiguity of the transmission coefficient is presented....

  5. Children's Questions: A Mechanism for Cognitive Development

    ERIC Educational Resources Information Center

    Chouinard, Michelle M.

    2007-01-01

    Preschoolers' questions may play an important role in cognitive development. When children encounter a problem with their current knowledge state (a gap in their knowledge, some ambiguity they do not know how to resolve, some inconsistency they have detected), asking a question allows them to get targeted information exactly when they need it.…

  6. 75 FR 56868 - Implementation of the Satellite Television Extension and Localism Act of 2010

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-17

    .... (``Subsection (c) resolves the phantom signal ambiguity that required cable systems to pay royalty fees for... distant signals to some but not all communities to calculate royalty fees on the basis of the actual...'s computation of its royalty fee consistent with the methodology described in subparagraph (C)(iii...

  7. Diagnosis of vulnerable atherosclerotic plaques by time-resolved fluorescence spectroscopy and ultrasound imaging.

    PubMed

    Jo, J A; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Beseth, B; Dorafshar, A H; Reil, T; Baker, D; Freischlag, J; Shung, K K; Sun, L; Marcu, L

    2006-01-01

    In this study, time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) and ultrasonography were applied to detect vulnerable (high-risk) atherosclerotic plaque. A total of 813 TR-LIFS measurements were taken from carotid plaques of 65 patients, and subsequently analyzed using the Laguerre deconvolution technique. The investigated spots were classified by histopathology as thin, fibrotic, calcified, low-inflamed, inflamed and necrotic lesions. Spectral and time-resolved parameters (normalized intensity values and Laguerre expansion coefficients) were extracted from the TR-LIFS data. Feature selection for classification was performed by either analysis of variance (ANOVA) or principal component analysis (PCA). A stepwise linear discriminant analysis algorithm was developed for detecting inflamed and necrotic lesion, representing the most vulnerable plaques. These vulnerable plaques were detected with high sensitivity (>80%) and specificity (>90%). Ultrasound (US) imaging was obtained in 4 carotid plaques in addition to TR-LIFS examination. Preliminary results indicate that US provides important structural information of the plaques that could be combined with the compositional information obtained by TR-LIFS, to obtain a more accurate diagnosis of vulnerable atherosclerotic plaque.

  8. Three-frequency BDS precise point positioning ambiguity resolution based on raw observables

    NASA Astrophysics Data System (ADS)

    Li, Pan; Zhang, Xiaohong; Ge, Maorong; Schuh, Harald

    2018-02-01

    All BeiDou navigation satellite system (BDS) satellites are transmitting signals on three frequencies, which brings new opportunities and challenges for high-accuracy precise point positioning (PPP) with ambiguity resolution (AR). This paper proposes an effective uncalibrated phase delay (UPD) estimation and AR strategy based on a raw PPP model. First, triple-frequency raw PPP models are developed. The observation model and stochastic model are designed and extended to accommodate the third frequency. Then, the UPD is parameterized in raw frequency form while being estimated with the high-precision, low-noise integer linear combinations of float ambiguities derived by ambiguity decorrelation. Third, with the UPD corrected, the LAMBDA method is used to resolve the full or partial set of ambiguities that can be fixed. This method can be easily and flexibly extended to dual-, triple-, or even more frequencies. To verify the effectiveness and performance of triple-frequency PPP AR, tests with real BDS data from 90 stations lasting 21 days were performed in static mode. Data were processed with three strategies: BDS triple-frequency ambiguity-float PPP, and BDS triple-frequency PPP with dual-frequency (B1/B2) and with three-frequency AR, respectively. Extensive experimental results showed that, compared with the ambiguity-float solution, the performance in terms of convergence time and positioning biases can be significantly improved by AR. Among the three groups of solutions, the triple-frequency PPP AR achieved the best performance. Compared with dual-frequency AR, the additional third frequency could appreciably improve the position estimates during the initialization phase and in constrained environments where dual-frequency PPP AR is limited by a small number of visible satellites.

  9. A circular median filter approach for resolving directional ambiguities in wind fields retrieved from spaceborne scatterometer data

    NASA Technical Reports Server (NTRS)

    Schultz, Howard

    1990-01-01

    The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
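
    The core of the CMF — a median defined on angles, so that, for example, 350° and 10° are close neighbors — can be sketched as follows; the weighting hook mirrors the cell-weighting scheme described above.

```python
def circular_distance(a, b):
    # Smallest absolute angular separation between two directions, degrees.
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def circular_median(angles, weights=None):
    # The sample direction minimizing the (optionally weighted) total
    # circular distance to all others; larger weights let more reliable
    # wind-vector cells count more.
    if weights is None:
        weights = [1.0] * len(angles)
    return min(angles, key=lambda a: sum(w * circular_distance(a, b)
                                         for b, w in zip(angles, weights)))
```

Applied over a sliding neighborhood of retrieved wind directions, this replaces each cell by the circular median of its neighbors, independently of wind speed.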

  10. The spatial-temporal ambiguity in auroral modeling

    NASA Technical Reports Server (NTRS)

    Rees, M. H.; Roble, R. G.; Kopp, J.; Abreu, V. J.; Rusch, D. W.; Brace, L. H.; Brinton, H. C.; Hoffman, R. A.; Heelis, R. A.; Kayser, D. C.

    1980-01-01

    The paper examines the time-dependent models of the aurora which show that various ionospheric parameters respond to the onset of auroral ionization with different time histories. A pass of the Atmosphere Explorer C satellite over Poker Flat, Alaska, and ground based photometric and photographic observations have been used to resolve the time-space ambiguity of a specific auroral event. The density of the O(+), NO(+), O2(+), and N2(+) ions, the electron density, and the electron temperature observed at 280 km altitude in a 50 km wide segment of an auroral arc are predicted by the model if particle precipitation into the region commenced about 11 min prior to the overpass.

  11. Real-Time GNSS-Based Attitude Determination in the Measurement Domain.

    PubMed

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-02-05

    A multi-antenna-based GNSS receiver is capable of providing a high-precision, drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude solution is also increased, so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic results show that the proposed method can obtain an optimal balance between accuracy and reliability performance.

  12. Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.

    PubMed

    Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason

    2017-07-01

    Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.

  13. TH-AB-209-03: Overcoming Resolution Limitations of Diffuse Optical Signals in X-Ray Induced Luminescence (XIL) Imaging Via Selective Plane Illumination and 2D Deconvolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quigley, B; Smith, C; La Riviere, P

    2016-06-15

    Purpose: To evaluate the resolution and sensitivity of XIL imaging using a surface radiance simulation based on optical diffusion and maximum likelihood expectation maximization (MLEM) image reconstruction. XIL imaging seeks to determine the distribution of luminescent nanophosphors, which could be used as nanodosimeters or radiosensitizers. Methods: The XIL simulation generated a homogeneous slab with optical properties similar to tissue. X-ray activated nanophosphors were placed at 1.0 cm depth in the tissue in concentrations of 10^-4 g/mL in two volumes of 10 mm^3 with varying separations between each other. An analytical optical diffusion model determined the surface radiance from the photon distributions generated at depth in the tissue by the nanophosphors. The simulation then determined the detected luminescent signal collected with an f/1.0 aperture lens and back-illuminated EMCCD camera. The surface radiance was deconvolved using an MLEM algorithm to estimate the nanophosphor distribution and the resolution. To account for both Poisson and Gaussian noise, a shifted Poisson imaging model was used in the deconvolution. The deconvolved distributions were fitted to a Gaussian after radial averaging to measure the full width at half maximum (FWHM), and the peak-to-peak distance between distributions was measured to determine the resolving power. Results: Simulated surface radiances for doses from 1 mGy to 100 cGy were computed. Each image was deconvolved using 1000 iterations. At 1 mGy, deconvolution reduced the FWHM of the nanophosphor distribution by 65%, with a resolving power of 3.84 mm. Decreasing the dose from 100 cGy to 1 mGy increased the FWHM by 22% but allowed for a dose reduction of a factor of 1000. Conclusion: Deconvolving the detected surface radiance allows for dose reduction while maintaining the resolution of the nanophosphors. It proves to be a useful technique in overcoming the resolution limitations of diffuse optical imaging in tissue. C. S. acknowledges support from the NIH National Institute of General Medical Sciences (Award number R25GM109439, Project Title: University of Chicago Initiative for Maximizing Student Development, IMSD). B. Q. and P. L. acknowledge support from NIH grant R01EB017293.
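
    A 1-D sketch of the MLEM deconvolution step with a shifted-Poisson model: adding the Gaussian read-noise variance to both the data and the forward model approximates mixed Poisson-plus-Gaussian noise as purely Poisson. The kernel size, source layout, and iteration count below are illustrative, not the simulation's actual parameters.

```python
import numpy as np

def mlem_deconvolve(observed, psf, n_iter=200, gauss_var=0.0):
    # Richardson-Lucy / MLEM multiplicative update. The shifted-Poisson
    # trick adds the Gaussian noise variance to data and model alike.
    psf = psf / psf.sum()
    psf_adj = psf[::-1]                      # adjoint of the convolution
    est = np.full_like(observed, observed.mean())
    shifted = observed + gauss_var
    for _ in range(n_iter):
        model = np.convolve(est, psf, mode="same") + gauss_var
        est = est * np.convolve(shifted / np.maximum(model, 1e-12),
                                psf_adj, mode="same")
    return est

# Two point-like nanophosphor "sources" blurred by a Gaussian kernel,
# standing in for the diffuse surface radiance.
truth = np.zeros(64)
truth[20] = 1.0
truth[40] = 1.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = mlem_deconvolve(blurred, psf, n_iter=200)
```

The multiplicative form keeps the estimate non-negative, which is why MLEM-type updates are the standard choice for emission-style imaging problems like XIL.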

  14. Improving Ambiguity Resolution for Medium Baselines Using Combined GPS and BDS Dual/Triple-Frequency Observations.

    PubMed

    Gao, Wang; Gao, Chengfa; Pan, Shuguo; Wang, Denghui; Deng, Jiadong

    2015-10-30

    The regional constellation of the BeiDou navigation satellite system (BDS) has been providing continuous positioning, navigation and timing services since 27 December 2012, covering China and the surrounding area. Real-time kinematic (RTK) positioning with combined BDS and GPS observations is feasible. Besides, all satellites of BDS can transmit triple-frequency signals. Using the advantages of multi-pseudorange and carrier observations from multiple systems and frequencies is expected to be of much benefit for ambiguity resolution (AR). We propose an integrated AR strategy for medium baselines using combined GPS and BDS dual/triple-frequency observations. In this method, the extra-wide-lane (EWL) ambiguities of the triple-frequency system, i.e., BDS, are determined first. Then the dual-frequency WL ambiguities of BDS and GPS are resolved with the geometry-based model using the BDS ambiguity-fixed EWL observations. After that, basic (i.e., L1/L2 or B1/B2) ambiguities of BDS and GPS are estimated together with the so-called ionosphere-constrained model, where the ambiguity-fixed WL observations are added to enhance the model strength. During both the WL and basic AR, a partial ambiguity fixing (PAF) strategy is adopted to weaken the negative influence of newly rising or low-elevation satellites. Experiments were conducted and presented, in which the GPS/BDS dual/triple-frequency data were collected in Nanjing and Zhengzhou, China, with baseline distances varying from about 28.6 to 51.9 km. The results indicate that, compared to the single triple-frequency BDS system, the combined system can significantly enhance the AR model strength, and thus improve AR performance for medium baselines, with a 75.7% reduction of initialization time on average. Besides, more accurate and stable positioning results can also be derived by using the combined GPS/BDS system.
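
    The dual-frequency wide-lane step can be illustrated with the classical Melbourne-Wubbena combination, which estimates the wide-lane integer directly from phase and code observations on two frequencies. BDS B1/B2 carrier frequencies are shown; the simulated observation is noiseless for clarity (real code noise is what makes averaging over epochs necessary).

```python
C = 299_792_458.0                             # speed of light, m/s
F1, F2 = 1_561_098_000.0, 1_207_140_000.0     # BDS B1 / B2 frequencies, Hz

def mw_wide_lane(l1_m, l2_m, p1_m, p2_m):
    # Melbourne-Wubbena combination: wide-lane phase minus narrow-lane
    # code, divided by the wide-lane wavelength, estimates N1 - N2.
    lam_wl = C / (F1 - F2)
    phase_wl = (F1 * l1_m - F2 * l2_m) / (F1 - F2)
    code_nl = (F1 * p1_m + F2 * p2_m) / (F1 + F2)
    return (phase_wl - code_nl) / lam_wl

# Noiseless synthetic observation: range rho plus integer ambiguities.
rho, n1, n2 = 21_000_000.0, 5, 3
l1 = rho + n1 * (C / F1)                      # phase in metres
l2 = rho + n2 * (C / F2)
nwl = mw_wide_lane(l1, l2, rho, rho)          # recovers n1 - n2
```

Because the geometry, clocks, and troposphere cancel identically in this combination, only code noise and multipath limit how quickly the wide-lane integer can be fixed.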

  15. Improving Ambiguity Resolution for Medium Baselines Using Combined GPS and BDS Dual/Triple-Frequency Observations

    PubMed Central

    Gao, Wang; Gao, Chengfa; Pan, Shuguo; Wang, Denghui; Deng, Jiadong

    2015-01-01

    The regional constellation of the BeiDou navigation satellite system (BDS) has been providing continuous positioning, navigation and timing services since 27 December 2012, covering China and the surrounding area. Real-time kinematic (RTK) positioning with combined BDS and GPS observations is feasible. Besides, all satellites of BDS can transmit triple-frequency signals. Using the advantages of multi-pseudorange and carrier observations from multiple systems and frequencies is expected to be of much benefit for ambiguity resolution (AR). We propose an integrated AR strategy for medium baselines using combined GPS and BDS dual/triple-frequency observations. In this method, the extra-wide-lane (EWL) ambiguities of the triple-frequency system, i.e., BDS, are determined first. Then the dual-frequency WL ambiguities of BDS and GPS are resolved with the geometry-based model using the BDS ambiguity-fixed EWL observations. After that, basic (i.e., L1/L2 or B1/B2) ambiguities of BDS and GPS are estimated together with the so-called ionosphere-constrained model, where the ambiguity-fixed WL observations are added to enhance the model strength. During both the WL and basic AR, a partial ambiguity fixing (PAF) strategy is adopted to weaken the negative influence of newly rising or low-elevation satellites. Experiments were conducted and presented, in which the GPS/BDS dual/triple-frequency data were collected in Nanjing and Zhengzhou, China, with baseline distances varying from about 28.6 to 51.9 km. The results indicate that, compared to the single triple-frequency BDS system, the combined system can significantly enhance the AR model strength, and thus improve AR performance for medium baselines, with a 75.7% reduction of initialization time on average. Besides, more accurate and stable positioning results can also be derived by using the combined GPS/BDS system. PMID:26528977

  16. Unbiased in-depth characterization of CEX fractions from a stressed monoclonal antibody by mass spectrometry

    PubMed Central

    Griaud, François; Denefeld, Blandine; Lang, Manuel; Hensinger, Héloïse; Haberl, Peter; Berg, Matthias

    2017-01-01

    ABSTRACT Characterization of charge-based variants by mass spectrometry (MS) is required for the analytical development of a new biologic entity and its marketing approval by health authorities. However, standard peak-based data analysis approaches are time-consuming and biased toward the detection, identification, and quantification of main variants only. The aim of this study was to characterize in-depth acidic and basic species of a stressed IgG1 monoclonal antibody using comprehensive and unbiased MS data evaluation tools. Fractions collected from cation ion exchange (CEX) chromatography were analyzed as intact, after reduction of disulfide bridges, and after proteolytic cleavage using Lys-C. Data of both intact and reduced samples were evaluated consistently using a time-resolved deconvolution algorithm. Peptide mapping data were processed simultaneously, quantified and compared in a systematic manner for all MS signals and fractions. Differences observed between the fractions were then further characterized and assigned. Time-resolved deconvolution enhanced pattern visualization and data interpretation of main and minor modifications in 3-dimensional maps across CEX fractions. Relative quantification of all MS signals across CEX fractions before peptide assignment enabled the detection of fraction-specific chemical modifications at abundances below 1%. Acidic fractions were shown to be heterogeneous, containing antibody fragments, glycated as well as deamidated forms of the heavy and light chains. In contrast, the basic fractions contained mainly modifications of the C-terminus and pyroglutamate formation at the N-terminus of the heavy chain. Systematic data evaluation was performed to investigate multiple data sets and comprehensively extract main and minor differences between each CEX fraction in an unbiased manner. PMID:28379786

  17. Active inference and learning.

    PubMed

    Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O Doherty, John; Pezzulo, Giovanni

    2016-09-01

    This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Children's Interpretation of Ambiguous Behavior: Evidence for a "Boys Are Bad" Bias.

    ERIC Educational Resources Information Center

    Heyman, Gail D.

    2001-01-01

    Investigated whether 7- to 9-year-olds use gender category to resolve uncertainty when evaluating behavior. Subjects were shown pictures of unfamiliar children and were told that each had performed a behavior open to multiple interpretations. When the unfamiliar peers were male, both male and female subjects were more likely to remember behaviors…

  19. Morphological Decomposition in the Recognition of Prefixed and Suffixed Words: Evidence from Korean

    ERIC Educational Resources Information Center

    Kim, Say Young; Wang, Min; Taft, Marcus

    2015-01-01

    Korean has visually salient syllable units that are often mapped onto either prefixes or suffixes in derived words. In addition, prefixed and suffixed words may be processed differently given a left-to-right parsing procedure and the need to resolve morphemic ambiguity in prefixes in Korean. To test this hypothesis, four experiments using the…

  20. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
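
    The Richardson-Lucy backbone shared by dRL-SD and RUMBA-SD is a multiplicative update in matrix form; a minimal sketch under the basic Poisson-style likelihood is shown below (the Rician and noncentral-Chi variants modify the ratio term). The dictionary H and the two-fibre setup are toy stand-ins, not an actual fibre response model.

```python
import numpy as np

def rl_update(signal, H, n_iter=2000):
    # Multiplicative Richardson-Lucy iterations:
    #     f <- f * (H^T (s / (H f))) / (H^T 1)
    # keeping the fODF coefficients non-negative by construction.
    f = np.ones(H.shape[1])
    norm = H.T @ np.ones(H.shape[0])
    for _ in range(n_iter):
        model = np.maximum(H @ f, 1e-12)
        f = f * (H.T @ (signal / model)) / np.maximum(norm, 1e-12)
    return f

# Toy dictionary of two "fibre responses" and a noiseless mixed signal.
H = np.array([[1.0, 0.2],
              [0.2, 1.0],
              [0.5, 0.5]])
f_true = np.array([1.0, 0.3])
s = H @ f_true
f_hat = rl_update(s, H)
```

Swapping the ratio term s / (H f) for its Rician or noncentral-Chi counterpart is, in essence, how RUMBA-SD adapts this update to realistic multichannel noise.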

  1. Hybrid Imaging for Extended Depth of Field Microscopy

    NASA Astrophysics Data System (ADS)

    Zahreddine, Ramzi Nicholas

    An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems where resolution is pushed to the diffraction limit resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span over micron scales resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step a mask is inserted at the pupil plane of the microscope to create a focus invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), that combines the most advantageous properties from the literature. Total variation regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State of the art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.
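
    The second, contrast-restoring step can be illustrated with the simplest frequency-domain deconvolver, a Wiener filter; the thesis itself uses TV-regularized deconvolution, and the 1-D test signal and noise-to-signal ratio here are purely illustrative.

```python
import numpy as np

def wiener_deconvolve(blurred, psf_centered, nsr=1e-4):
    # Frequency-domain Wiener filter: F = G * conj(H) / (|H|^2 + nsr).
    # psf_centered has its peak at the array centre; ifftshift moves it
    # to index 0 so its FFT carries the correct (near-zero) phase.
    H = np.fft.fft(np.fft.ifftshift(psf_centered))
    G = np.fft.fft(blurred)
    return np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H) ** 2 + nsr)))

n = np.arange(64)
signal = np.sin(2.0 * np.pi * 3.0 * n / 64.0)            # smooth test object
psf = np.exp(-0.5 * ((n - 32.0) / 2.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))   # circular blur
restored = wiener_deconvolve(blurred, psf)
```

The nsr term regularizes frequencies where the pupil-mask transfer function is weak, which is exactly where an EDF mask sacrifices contrast in exchange for focus invariance.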

  2. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis, because it enhances the impulsive component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's ability to enhance the fault impulses (namely, those of faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
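
    The criterion being optimized — whether by the classical iterative filter update or by a particle swarm — is the impulsiveness (normalized kurtosis) of the filtered output. A minimal sketch of the objective a single swarm particle would evaluate, on toy signals and with no actual swarm:

```python
import numpy as np

def kurtosis(x):
    # Normalized kurtosis: large when the signal energy is concentrated
    # in a few impulses, small for smooth or harmonic signals. This is
    # the quantity minimum entropy deconvolution drives up.
    x = x - x.mean()
    return len(x) * np.sum(x ** 4) / np.sum(x ** 2) ** 2

def med_objective(filt, signal):
    # Objective evaluated for one candidate FIR filter (e.g. one
    # particle in the swarm): kurtosis of the deconvolved output.
    return kurtosis(np.convolve(signal, filt, mode="valid"))

impulsive = np.zeros(100)
impulsive[::25] = 1.0                                 # fault-like impulse train
smooth = np.sin(2.0 * np.pi * np.arange(100) / 10.0)  # harmonic interference
```

A PSO would move a population of candidate filter-coefficient vectors through this objective landscape; the spherical coordinate transformation mentioned above reparameterizes the coefficients to constrain the filter's norm during that search.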

  3. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
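
    The idea of deconvolving only on the Fourier entries where the estimated kernel is reliable can be sketched in 1-D. The thresholded magnitude mask and Wiener-style inverse below stand in for the paper's estimated partial map and E-M procedure; the signal, kernels, and threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth 1-D "image" and a Gaussian blur kernel.
n = 256
x = np.zeros(n)
x[60:80] = 1.0
x[150:155] = 2.0
t = np.arange(n) - n // 2
k_true = np.exp(-t**2 / (2 * 3.0**2))
k_true /= k_true.sum()
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(np.fft.ifftshift(k_true))))
y += 0.01 * rng.standard_normal(n)

# Inaccurate kernel estimate (wrong width), as left over by blind deconvolution.
k_est = np.exp(-t**2 / (2 * 3.5**2))
k_est /= k_est.sum()
K = np.fft.fft(np.fft.ifftshift(k_est))

# "Partial map": trust only the Fourier entries where the estimated kernel has
# non-negligible magnitude; elsewhere keep the observed (blurred) spectrum.
mask = np.abs(K) > 0.05
X_wiener = np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + 1e-3)
X_hat = np.where(mask, X_wiener, np.fft.fft(y))
x_hat = np.real(np.fft.ifft(X_hat))
```

    Restricting the inverse filter to the reliable band avoids the worst amplification of kernel error and noise, which is the intuition behind partial deconvolution.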

  4. Electrochemical force microscopy

    DOEpatents

    Kalinin, Sergei V.; Jesse, Stephen; Collins, Liam F.; Rodriguez, Brian J.

    2017-01-10

    A system and method for electrochemical force microscopy are provided. The system and method are based on a multidimensional detection scheme that is sensitive to forces experienced by a biased electrode in a solution. The multidimensional approach allows separation of fast processes, such as double-layer charging and charge relaxation, from slow processes, such as diffusion and faradaic reactions, as well as capturing the bias dependence of the response. The time-resolved and bias measurements can also allow probing of both linear (small bias range) and non-linear (large bias range) electrochemical regimes and potentially the de-convolution of charge dynamics and diffusion processes from steric effects and electrochemical reactivity.

  5. ddClone: joint statistical inference of clonal populations from single cell and bulk tumour sequencing data.

    PubMed

    Salehi, Sohrab; Steif, Adi; Roth, Andrew; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P

    2017-03-01

    Next-generation sequencing (NGS) of bulk tumour tissue can identify constituent cell populations in cancers and measure their abundance. This requires computational deconvolution of allelic counts from somatic mutations, which may be incapable of fully resolving the underlying population structure. Single cell sequencing (SCS) is a more direct method, although its replacement of NGS is impeded by technical noise and sampling limitations. We propose ddClone, which analytically integrates NGS and SCS data, leveraging their complementary attributes through joint statistical inference. We show on real and simulated datasets that ddClone produces more accurate results than can be achieved by either method alone.

  6. Analysis of thin fractures with GPR: from theory to practice

    NASA Astrophysics Data System (ADS)

    Arosio, Diego; Zanzi, Luigi; Longoni, Laura; Papini, Monica

    2017-04-01

    Whenever we perform a GPR survey of a rocky medium, whether the ultimate purpose is to study the stability of a rock slope or to determine the soundness of a quarried rock block, the main aims are to detect any fractures within the investigated medium and, where possible, to estimate the fracture parameters, namely thickness and filling material. In most practical cases, rock fracture thicknesses are very small compared to the wavelength of the electromagnetic radiation generated by GPR systems. In such cases, fractures are to be considered thin beds, i.e. two interfaces whose separation is smaller than the GPR resolving capability, so that the reflected signal is the sum of the electromagnetic reverberation within the bed. Accordingly, fracture parameters are encoded in the thin-bed complex response, and in this work we propose a methodology based on deterministic deconvolution that processes amplitude and phase information in the frequency domain to estimate fracture parameters. We first present some theoretical aspects of the thin-bed response and a sensitivity analysis concerning fracture thickness and filling. Secondly, we deal with GPR datasets collected both during laboratory experiments and at quarry sites. In the lab tests, fractures were simulated by placing materials with known electromagnetic parameters and controlled thickness between two small marble blocks, whereas field GPR surveys were performed on larger quarried ornamental stone blocks before they were submitted to the cutting process. We show that, with basic pre-processing and the choice of a proper deconvolving signal, results are encouraging, although an ambiguity between thickness and filling estimates exists when no a-priori information is available.
Results can be improved by performing CMP radar surveys that are able to provide additional information (i.e., variation of thin bed response versus offset) at the expense of acquisition effort and of more complex and tricky pre-processing sequences.

  7. Segmentation of the mouse fourth deep lumbrical muscle connectome reveals concentric organisation of motor units

    PubMed Central

    Hirst, Theodore C; Ribchester, Richard R

    2013-01-01

    Connectomic analysis of the nervous system aims to discover and establish principles that underpin normal and abnormal neural connectivity and function. Here we performed image analysis of motor unit connectivity in the fourth deep lumbrical muscle (4DL) of mice, using transgenic expression of fluorescent protein in motor neurones as a morphological reporter. We developed a method that accelerated segmentation of confocal image projections of 4DL motor units, by applying high resolution (63×, 1.4 NA objective) imaging or deconvolution only where either proved necessary, in order to resolve axon crossings that produced ambiguities in the correct assignment of axon terminals to identified motor units imaged at lower optical resolution (40×, 1.3 NA). The 4DL muscles contained between 4 and 9 motor units and motor unit sizes ranged in distribution from 3 to 111 motor nerve terminals per unit. Several structural properties of the motor units were consistent with those reported in other muscles, including suboptimal wiring length and distribution of motor unit size. Surprisingly, however, small motor units were confined to a region of the muscle near the nerve entry point, whereas their larger counterparts were progressively more widely dispersed, suggesting a previously unrecognised form of segregated motor innervation in this muscle. We also found small but significant differences in variance of motor endplate length in motor units, which correlated weakly with their motor unit size. Thus, our connectomic analysis has revealed a pattern of concentric innervation that may perhaps also exist in other, cylindrical muscles that have not previously been thought to show segregated motor unit organisation. This organisation may be the outcome of competition during postnatal development based on intrinsic neuronal differences in synaptic size or synaptic strength that generates a territorial hierarchy in motor unit size and disposition. PMID:23940381

  8. Deconvolution method for accurate determination of overlapping peak areas in chromatograms.

    PubMed

    Nelson, T J

    1991-12-20

    A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.

  9. Laser System for Precise, Unambiguous Range Measurements

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, Oliver

    2005-01-01

    The Modulation Sideband Technology for Absolute Range (MSTAR) architecture is the basis of design of a proposed laser-based heterodyne interferometer that could measure a range (distance) as great as 100 km with a precision and resolution of the order of 1 nm. Simple optical interferometers can measure changes in range with nanometer resolution, but cannot measure range itself because interference is subject to the well-known integer-multiple-of-2π-radians phase ambiguity, which amounts to a range ambiguity of the order of 1 µm at typical laser wavelengths. Existing rangefinders have a resolution of the order of 10 µm and are therefore unable to resolve the ambiguity. The proposed MSTAR architecture bridges the gap, enabling nanometer resolution with an ambiguity range that can be extended to arbitrarily large distances. The MSTAR architecture combines the principle of the heterodyne interferometer with the principle of extending the ambiguity range of an interferometer by using light of two wavelengths. The use of two wavelengths for this purpose is well established in optical metrology, radar, and sonar. However, unlike in traditional two-color laser interferometry, light of two wavelengths would not be generated by two lasers. Instead, multiple wavelengths would be generated as sidebands of phase modulation of the light from a single frequency-stabilized laser. The phase modulation would be effected by applying sinusoidal signals of suitable frequencies (typically tens of gigahertz) to high-speed electro-optical phase modulators. Intensity modulation can also be used
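
    The two-wavelength (synthetic-wavelength) principle behind MSTAR can be shown with a few lines of arithmetic. All numbers below (carrier frequency, sideband offset, target range) are illustrative assumptions, and the sketch is noise-free:

```python
# Synthetic-wavelength ranging sketch.
c = 299_792_458.0
f1 = 193.4e12                  # optical carrier, ~1550 nm (assumed)
df = 30e9                      # modulation-sideband offset (assumed)
lam1 = c / f1                  # optical wavelength, ~1.55 um
lam_synth = c / df             # synthetic wavelength, ~1 cm

true_range = 1.23456789e-3     # metres; assumed within the synthetic ambiguity range

# Each interference measurement yields range only modulo half a wavelength.
fine = true_range % (lam1 / 2)         # nm-level precision, ~0.8 um ambiguity
coarse = true_range % (lam_synth / 2)  # ~5 mm ambiguity, coarser precision

# The coarse (synthetic-wavelength) phase selects the integer fringe of the fine one.
n_fringe = round((coarse - fine) / (lam1 / 2))
recovered = n_fringe * (lam1 / 2) + fine
```

    Lowering the sideband offset extends the synthetic wavelength, and hence the ambiguity range, without degrading the fine (optical-wavelength) precision.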

  10. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    PubMed

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition on the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor in the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method obtains better results in solving the relationship between the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted on a selected campus, and the performance is shown to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.
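
    The core of the ambiguity function method is a search that scores candidate attitudes by how well the fractional carrier phases fit, without ever fixing the integers explicitly. A noise-free single-baseline heading search can be sketched as follows; the satellite geometry and baseline length are assumed values, not data from the paper:

```python
import numpy as np

lam = 0.19029                       # GPS L1 wavelength, m
L = 1.0                             # baseline length, m (assumed)
e = np.array([[0.31, 0.52, 0.80],   # unit line-of-sight vectors (ENU), assumed
              [-0.60, 0.21, 0.77],
              [0.71, -0.38, 0.59],
              [0.12, -0.80, 0.59],
              [-0.45, -0.47, 0.76]])
e = e / np.linalg.norm(e, axis=1, keepdims=True)

true_heading = 37.0                 # degrees
psi = np.deg2rad(true_heading)
b_true = L * np.array([np.sin(psi), np.cos(psi), 0.0])
obs = (e @ b_true) / lam            # carrier phases in cycles
obs_frac = obs - np.floor(obs)      # only the fractional part is observable

def ambiguity_function(heading_deg):
    # AFM score: sum of cos(2*pi*residual), insensitive to the unknown integers.
    p = np.deg2rad(heading_deg)
    b = L * np.array([np.sin(p), np.cos(p), 0.0])
    pred = (e @ b) / lam
    return np.sum(np.cos(2 * np.pi * (obs_frac - pred)))

grid = np.arange(0.0, 360.0, 0.1)
vals = np.array([ambiguity_function(h) for h in grid])
best = grid[np.argmax(vals)]
```

    The score attains its maximum (the number of satellites) at the true heading; the baseline-length condition discussed in the abstract serves precisely to shrink this search space.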

  11. Real-Time GNSS-Based Attitude Determination in the Measurement Domain

    PubMed Central

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-01-01

    A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by 18%, at most. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance. PMID:28165434

  12. Co-Localization of Stroop and Syntactic Ambiguity Resolution in Broca's Area: Implications for the Neural Basis of Sentence Processing

    ERIC Educational Resources Information Center

    January, David; Trueswell, John C.; Thompson-Schill, Sharon L.

    2009-01-01

    For over a century, a link between left prefrontal cortex and language processing has been accepted, yet the precise characterization of this link remains elusive. Recent advances in both the study of sentence processing and the neuroscientific study of frontal lobe function suggest an intriguing possibility: the demands to resolve competition…

  13. Optimal Superpositioning of Flexible Molecule Ensembles

    PubMed Central

    Gapsys, Vytautas; de Groot, Bert L.

    2013-01-01

    Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and nearest-neighbor distance minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
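
    The pairwise building block that the ensemble formulation generalizes is the standard least-squares (Kabsch) superposition, which can be written compactly with an SVD. This is a sketch of that classic step on synthetic coordinates, not the authors' nearest-neighbour variant:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation aligning row-vector point set P onto Q (both centred here)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])     # guard against improper rotations (reflections)
    return U @ D @ Vt

rng = np.random.default_rng(2)
Q = rng.standard_normal((10, 3))   # reference conformation
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = Q @ R_true.T                   # the same structure, rigidly rotated

R = kabsch(P, Q)
P_aligned = (P - P.mean(axis=0)) @ R
```

    The rotational ambiguity discussed in the abstract arises when this step is applied independently across an ensemble: structurally similar frames can receive quite different optimal rotations, which the authors' variance-plus-distance objectives are designed to suppress.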

  14. A review on the inter-frequency biases of GLONASS carrier-phase data

    NASA Astrophysics Data System (ADS)

    Geng, Jianghui; Zhao, Qile; Shi, Chuang; Liu, Jingnan

    2017-03-01

    GLONASS ambiguity resolution (AR) between inhomogeneous stations requires correction of inter-frequency phase biases (IFPBs) (a "station" here is an integral ensemble of a receiver, an antenna, firmware, etc.). It has been elucidated that IFPBs as a linear function of channel numbers are not physical in nature, but actually originate in differential code-phase biases (DCPBs). Although IFPBs have been widely recognized, an unanswered question is whether IFPBs and DCPBs are equivalent in enabling GLONASS AR. Besides, general strategies for DCPB estimation across a large network of heterogeneous stations are still under investigation within the GNSS community, such as whether one DCPB per receiver type (rather than per individual station) suffices, as tentatively suggested by the IGS (International GNSS Service), and what accuracy we are able to, and ought to, achieve for DCPB products. In this study, we review the concept of DCPBs and point out that IFPBs are only approximate derivations from DCPBs, and are potentially problematic if carrier-phase hardware biases differ by up to several millimeters across frequency channels. We further stress the station- and observable-specific properties of DCPBs, which cannot be ignored as is conventionally done. With 212 days of data from 200 European stations, we estimated DCPBs per station by resolving ionosphere-free ambiguities of ˜5.3 cm wavelength, and compared them to presumed-truth benchmarks computed directly with L1 and L2 data on ultra-short baselines. On average, the accuracy of our DCPB products is around 0.7 ns in RMS. Based on this uncertainty estimate, we could unambiguously confirm that DCPBs can differ substantially, by up to 30 ns among receivers of identical types and over 10 ns across different observables.
In contrast, a DCPB error of more than 6 ns will decrease the fixing rate of ionosphere-free ambiguities by over 20 %, due to their smallest frequency spacing and highest sensitivity to DCPB errors. Therefore, we suggest that (1) the rigorous DCPB model should be implemented instead of the classic, but inaccurate IFPB model; (2) DCPBs of sub-ns accuracy can be achieved over a large network by efficiently resolving ionosphere-free ambiguities; (3) DCPBs should be estimated and applied on account of their station and observable specific properties, especially for ambiguities of short wavelengths.

  15. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  16. A neural network approach for the blind deconvolution of turbulent flows

    NASA Astrophysics Data System (ADS)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
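
    The paper trains a single-layer feedforward network by supervised learning on pairs of filtered and unfiltered flow variables. The 1-D sketch below keeps the same input/output structure (a stencil of filtered values mapped to the deconvolved point value) but substitutes a random-feature layer with a least-squares read-out, an extreme-learning-machine-style stand-in rather than the authors' training procedure; the signal, box filter, and network sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D stand-in for coarse-grained turbulence data.
n = 512
x = np.cumsum(rng.standard_normal(n))
x = (x - x.mean()) / x.std()               # rough multi-scale "turbulent" signal
kernel = np.ones(9) / 9.0                  # box filter (unknown to the network)
xf = np.convolve(x, kernel, mode="same")   # filtered (coarse-grained) field

# Supervised pairs: a stencil of filtered values -> the unfiltered point value.
S = 4
idx = np.arange(S, n - S)
X = np.stack([xf[i - S:i + S + 1] for i in idx])   # (n - 2S, 2S + 1)
target = x[idx]

# Random hidden layer plus linear read-out solved by least squares.
W = rng.standard_normal((2 * S + 1, 64))
b = rng.standard_normal(64)
H = np.concatenate([X, np.tanh(X @ W + b)], axis=1)
beta, *_ = np.linalg.lstsq(H, target, rcond=None)
x_deconv = H @ beta
```

    No information about the filter kernel enters the model, which is the sense in which the deconvolution is blind: the mapping is learned entirely from filtered/unfiltered training pairs.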

  17. Crowded field photometry with deconvolved images.

    NASA Astrophysics Data System (ADS)

    Linde, P.; Spännare, S.

    A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution provides more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
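
    The Lucy-Richardson iteration used above is compact enough to state in full. Here is a minimal 1-D sketch on a synthetic "two close stars" scene (the PSF, scene, and iteration count are illustrative assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=50):
    """Classic Richardson-Lucy iteration (1-D for brevity)."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Two close "stars" on a flat background, blurred by a Gaussian PSF.
x = np.zeros(128)
x[60] = 100.0
x[68] = 60.0
x += 1.0                                   # sky background
t = np.arange(-16, 17)
psf = np.exp(-t**2 / (2 * 3.0**2))
psf /= psf.sum()
blurred = fftconvolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=50)
```

    The multiplicative update keeps the estimate non-negative and concentrates flux back into point sources, which is why deconvolution improves the detection of faint stars; running too many iterations, however, amplifies noise into the artifacts the abstract mentions.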

  18. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
    The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
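
    Deterministic deconvolution of this kind amounts to dividing the trace spectrum by the known source-wavelet spectrum, with some stabilization. A minimal sketch on a synthetic trace (the wavelet shape, reflectivity, and water-level constant are assumptions, not the paper's measured air wavelet):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic GPR trace: reflectivity spikes convolved with a "ringy" source wavelet.
n = 512
refl = np.zeros(n)
refl[100] = 1.0       # top of a bed
refl[115] = -0.6      # nearby interface, overlapping the first reflection
refl[300] = 0.8       # a deeper reflector
t = np.arange(64)
wavelet = np.exp(-t / 12.0) * np.sin(2 * np.pi * 0.15 * t)   # damped ringing
trace = np.convolve(refl, wavelet)[:n] + 0.01 * rng.standard_normal(n)

# Deterministic deconvolution: divide by the known source-wavelet spectrum,
# stabilised with a small "water level" so spectral nulls do not blow up.
W = np.fft.rfft(wavelet, n)
water = 0.05 * np.max(np.abs(W))
D = np.fft.rfft(trace) * np.conj(W) / np.maximum(np.abs(W) ** 2, water**2)
decon = np.fft.irfft(D, n)
```

    Compressing the ringy wavelet back toward spikes is what separates the overlapping reflections; the quality of the result depends directly on how well the deconvolving wavelet matches the one actually radiated, which is why the paper measures it in air for the specific instrument.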

  19. The structure of the inner arcsecond of R Aquarii observed with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Burgarella, Denis; Paresce, Francesco

    1992-01-01

    The inner arcsec of R Aquarii has been observed with the Faint Object Camera on the Hubble Space Telescope. A simple and reliable linear deconvolution method is used to resolve the two features, designated C1 and C2 from radio observations, into several condensations. C1 is composed of four objects, designated C1a, C1b located at 0.099 arcsec from C1a, C3 at 0.162 arcsec from C1a, and C4 at 0.137 arcsec from C1a. The source C3, detected at 2 cm in the radio and in H-alpha, might be the V = 6-11 Mira variable. The nature of feature C4 is still unknown. Features C1a and C1b have not been resolved by another instrument, and it might be possible that the hot star is one of the two or a nearby nondetected object.

  20. X-ray vision of fuel sprays.

    PubMed

    Wang, Jin

    2005-03-01

    With brilliant synchrotron X-ray sources, microsecond time-resolved synchrotron X-ray radiography and tomography have been used to elucidate the detailed three-dimensional structure and dynamics of high-pressure high-speed fuel sprays in the near-nozzle region. The measurement allows quantitative determination of the fuel distribution in the optically impenetrable region owing to the multiple scattering of visible light by small atomized fuel droplets surrounding the jet. X-radiographs of the jet-induced shock waves prove that the fuel jets become supersonic under appropriate injection conditions and that the quantitative analysis of the thermodynamic properties of the shock waves can also be derived from the most direct measurement. In other situations where extremely axial-asymmetric sprays are encountered, mass deconvolution and cross-sectional fuel distribution models can be computed based on the monochromatic and time-resolved X-radiographic images collected from various rotational orientations of the sprays. Such quantitative analysis reveals the never-before-reported characteristics and most detailed near-nozzle mass distribution of highly transient fuel sprays.

  1. Correction for frequency-dependent hydrophone response to nonlinear pressure waves using complex deconvolution and rarefactional filtering: application with fiber optic hydrophones.

    PubMed

    Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R

    2015-01-01

    Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from the frequency-dependent hydrophone sensitivity) was investigated for improvement of the accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for the 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced the mean coefficient of variation (COV) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum-phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters, but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
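
    The frequency-domain inverse filter at the heart of complex deconvolution can be sketched in a few lines. The sensitivity model below (a first-order magnitude roll-off plus a pure delay) is hypothetical, standing in for the TDS-measured complex sensitivities in the paper, and the sketch is noise-free so no band-limiting filter is applied:

```python
import numpy as np

fs = 500e6                        # sample rate (assumed)
n = 4096
f = np.fft.rfftfreq(n, 1 / fs)

# Hypothetical complex hydrophone sensitivity: magnitude roll-off plus a
# frequency-dependent phase (a pure delay here); NOT a measured TDS response.
M = (1.0 / (1.0 + (f / 40e6) ** 2)) * np.exp(-2j * np.pi * f * 5e-9)

# Synthetic "true" pressure pulse with harmonic content.
t = np.arange(n) / fs
env = np.exp(-(((t - 1e-6) / 3e-7) ** 2))
p_true = env * (np.sin(2 * np.pi * 5e6 * t) + 0.3 * np.sin(2 * np.pi * 15e6 * t))

# Measured voltage: the pressure filtered by the sensitivity.
v = np.fft.irfft(np.fft.rfft(p_true) * M, n)

# Complex deconvolution: divide the voltage spectrum by the complex sensitivity.
p_rec = np.fft.irfft(np.fft.rfft(v) / M, n)
```

    With real, noisy data the division amplifies out-of-band noise, which is why the paper combines deconvolution with filters designed from the compressional/rarefactional spectral asymmetry.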

  2. Use of Referential Discourse Contexts in L2 Offline and Online Sentence Processing.

    PubMed

    Yang, Pi-Lan

    2016-10-01

    The present study aimed to investigate (a) the extent to which Chinese-speaking learners of English in Taiwan use referential noun phrase (NP) information contained in discourse contexts to complete ambiguous noun/verb fragments in a sentence completion task, and (b) whether and when they use the contexts to disambiguate main verb versus reduced relative clause (MV/RRC) ambiguities in real time. Results showed that unlike native English speakers, English learners did not create a marked increase in RRC completions in biasing two-NP-referent discourse contexts except for advanced learners. Nevertheless, like native speakers, the learners at elementary, intermediate, and advanced English proficiency levels all used the information in a later stage of resolving the MV/RRC ambiguities in real time. The delayed effect of referential context information observed suggests that L2 learners, like native speakers, are able to construct syntax-to-discourse mappings in real time. It also suggests that processing of syntactic information takes precedence over integration of syntactic information with discourse information during L1 and L2 online sentence processing.

  3. Resolving the cold debris disc around a planet-hosting star . PACS photometric imaging observations of q1 Eridani (HD 10647, HR 506)

    NASA Astrophysics Data System (ADS)

    Liseau, R.; Eiroa, C.; Fedele, D.; Augereau, J.-C.; Olofsson, G.; González, B.; Maldonado, J.; Montesinos, B.; Mora, A.; Absil, O.; Ardila, D.; Barrado, D.; Bayo, A.; Beichman, C. A.; Bryden, G.; Danchi, W. C.; Del Burgo, C.; Ertel, S.; Fridlund, C. W. M.; Heras, A. M.; Krivov, A. V.; Launhardt, R.; Lebreton, J.; Löhne, T.; Marshall, J. P.; Meeus, G.; Müller, S.; Pilbratt, G. L.; Roberge, A.; Rodmann, J.; Solano, E.; Stapelfeldt, K. R.; Thébault, Ph.; White, G. J.; Wolf, S.

    2010-07-01

    Context. About two dozen exo-solar debris systems have been spatially resolved. These debris discs commonly display a variety of structural features such as clumps, rings, belts, eccentric distributions and spiral patterns. In most cases, these features are believed to be formed, shaped and maintained by the dynamical influence of planets orbiting the host stars. In very few cases has the presence of the dynamically important planet(s) been inferred from direct observation. Aims: The solar-type star q1 Eri is known to be surrounded by debris, extended on scales of ⪉30”. The star is also known to host at least one planet, albeit on an orbit far too small to make it responsible for structures at distances of tens to hundreds of AU. The aim of the present investigation is twofold: to determine the optical and material properties of the debris and to infer the spatial distribution of the dust, which may hint at the presence of additional planets. Methods: The Photodetector Array Camera and Spectrometer (PACS) aboard the Herschel Space Observatory allows imaging observations in the far infrared at unprecedented resolution, i.e. at better than 6” to 12” over the wavelength range of 60 μm to 210 μm. Together with the results from ground-based observations, these spatially resolved data can be modelled to determine the nature of the debris and its evolution more reliably than would be possible from unresolved data alone. Results: For the first time, the q1 Eri disc has been resolved at far-infrared wavelengths. The PACS observations at 70 μm, 100 μm and 160 μm reveal an oval image showing a disc-like structure in all bands, the size of which increases with wavelength. Assuming a circular shape yields the inclination of its equatorial plane with respect to that of the sky, i > 53°. The results of image de-convolution indicate that i is likely larger than 63°, where 90° corresponds to an edge-on disc. Conclusions: The observed emission is thermal and optically thin.
The resolved data are consistent with debris at temperatures below 30 K at radii larger than 120 AU. From image deconvolution, we find that q1 Eri is surrounded by a ring about 40 AU wide at a radial distance of ~85 AU. This is the first real Edgeworth-Kuiper Belt analogue ever observed. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
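
    The quoted temperatures and radii are roughly consistent with blackbody grains in thermal equilibrium, for which T ≈ 278.3 K (L/L⊙)^(1/4) (r/AU)^(-1/2). A minimal back-of-envelope sketch of that consistency check, assuming q1 Eri is close to solar luminosity (an assumption made here for illustration only):

    ```python
    import math

    # Back-of-envelope check: blackbody grains have
    # T ~ 278.3 K * (L/Lsun)^(1/4) / sqrt(r/AU).
    # Assuming roughly solar luminosity for q1 Eri (an assumption here),
    # the quoted <30 K beyond 120 AU and the ~85 AU ring are consistent.

    def blackbody_grain_temp(r_au, lstar=1.0):
        return 278.3 * lstar ** 0.25 / math.sqrt(r_au)

    t_ring = blackbody_grain_temp(85.0)     # ~30 K at the modelled ring radius
    t_edge = blackbody_grain_temp(120.0)    # ~25 K, below the 30 K bound
    ```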

  4. On the Effects of Modeling As-Manufactured Geometry: Toward Digital Twin

    NASA Technical Reports Server (NTRS)

    Cerrone, Albert; Hochhalter, Jacob; Heber, Gerd; Ingraffea, Anthony

    2014-01-01

    A simple, nonstandardized material test specimen, which fails along one of two different likely crack paths, is considered herein. The result of deviations in geometry on the order of tenths of a millimeter, this ambiguity in crack path motivates the consideration of as-manufactured component geometry in the design, assessment, and certification of structural systems. Herein, finite element models of as-manufactured specimens are generated and subsequently analyzed to resolve the crack-path ambiguity. The consequence and benefit of such a "personalized" methodology is the prediction of a crack path for each specimen based on its as-manufactured geometry, rather than a distribution of possible specimen geometries or nominal geometry. The consideration of as-manufactured characteristics is central to the Digital Twin concept. Therefore, this work is also intended to motivate its development.

  5. Assisted living: a place to manage uncertainty. The ambiguity of assisted living is unavoidable because residents' needs are always changing. The Wheat Valley example is used to examine this concept.

    PubMed

    Ekerdt, David J

    2005-01-01

    The assisted living environment lacks the satisfying clarity of the consumer model (a stay at the Holiday Inn) or the medical model (the hospital or nursing home). Yet the ambiguity of assisted living is unavoidable because it shelters individuals whose needs are changing, the model of care requires extensive negotiation with residents, and staff members must continually compromise as they implement the principles. Assisted living is a place where uncertainty is managed, not resolved. This indicates a need for the further pursuit of qualitative research, such as reported by these articles and others (e.g., Carder, 2002), to explore how participants construct, make sense of, and interpret their daily experience in assisted living.

  6. Ion/molecule reactions to chemically deconvolute the electrospray ionization mass spectra of synthetic polymers.

    PubMed

    Lennon, John D; Cole, Scott P; Glish, Gary L

    2006-12-15

    A new approach has been developed to analyze synthetic polymers via electrospray ionization mass spectrometry. Ion/molecule reactions, a unique feature of trapping instruments such as quadrupole ion trap mass spectrometers, can be used to chemically deconvolute the molecular mass distribution of polymers from the charge-state distribution generated by electrospray ionization. The reaction involves stripping charge from multiply charged oligomers to reduce the number of charge states. This reduces or eliminates the overlapping of oligomers from adjacent charge states. 15-Crown-5 was used to strip alkali cations (Na+) from several poly(ethylene glycol) standards of narrow polydispersity. The charge-state distribution of each oligomer is reduced to primarily one charge state. Individual oligomers can be resolved, and the average molecular mass and polydispersity can be calculated for the polymers examined here. In most cases, the measured number-average molecular mass values are within 10% of the manufacturers' reported values obtained by gel permeation chromatography. The polydispersity was typically underestimated compared to values reported by the suppliers. Mn values were obtained with 0.5% RSD and are independent, over several orders of magnitude, of the polymer and cation concentration. The distributions obtained fit a Gaussian distribution quite well, indicating no high- or low-mass discrimination.
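
    Once individual oligomers are resolved, the number- and weight-average masses follow from simple intensity-weighted sums. A minimal sketch with an illustrative peak list (not data from the paper):

    ```python
    # Sketch: number- and weight-average molecular mass and polydispersity
    # computed from resolved oligomer peaks (mass, intensity) after
    # charge-state reduction. The peak list below is illustrative.

    def polymer_averages(peaks):
        """peaks: iterable of (oligomer_mass, intensity)."""
        sum_i = sum(i for _, i in peaks)
        sum_mi = sum(m * i for m, i in peaks)
        mn = sum_mi / sum_i                              # number average, Mn
        mw = sum(m * m * i for m, i in peaks) / sum_mi   # weight average, Mw
        return mn, mw, mw / mn                           # PDI = Mw / Mn

    # PEG-like series spaced by the 44 Da repeat unit, symmetric intensities
    peaks = [(900 + 44 * k, 10 - abs(k - 5)) for k in range(11)]
    mn, mw, pdi = polymer_averages(peaks)                # mn = 1120 by symmetry
    ```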

  7. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

    Deconvolution in perfusion-weighted imaging (PWI) plays an important role in quantifying MR perfusion parameters. The application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
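
    The core of frequency-domain deconvolution can be sketched in a few lines: divide FFTs with a spectral filter that tames noise amplification. The arterial input function (AIF), residue function, and Tikhonov-style filter below are illustrative assumptions, not the analytical filters proposed in the paper:

    ```python
    import numpy as np

    # Sketch of frequency-domain deconvolution (FDD) with a simple
    # Tikhonov-style spectral filter; all signals are synthetic.

    def fdd(tissue, aif, eps=0.01):
        """Estimate the residue function r from tissue = aif (*) r
        (circular convolution) by regularized FFT division."""
        A = np.fft.fft(aif)
        T = np.fft.fft(tissue)
        # Regularized inverse: conj(A) / (|A|^2 + (eps * max|A|)^2)
        H = np.conj(A) / (np.abs(A) ** 2 + (eps * np.abs(A).max()) ** 2)
        return np.real(np.fft.ifft(T * H))

    t = np.arange(64, dtype=float)
    aif = t * np.exp(-t / 4.0)                    # gamma-variate-like input
    r_true = np.exp(-t / 8.0)                     # exponential residue function
    # Build the tissue curve by circular convolution so the model is exact
    tissue = np.real(np.fft.ifft(np.fft.fft(aif) * np.fft.fft(r_true)))
    r_est = fdd(tissue, aif)
    ```

    In real perfusion data the regularization strength trades noise suppression against underestimation of the peak of the residue function, which is why the filter choice matters.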

  8. Studies of Solar Helicity Using Vector Magnetograms

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.; Pevstov, Alexei A.

    1999-01-01

    Observations of photospheric magnetic fields made with vector magnetographs have been used recently to study solar helicity. In this paper we indicate what can and cannot be derived from vector magnetograms, and point out some potential problems in these data that could affect the calculations of 'helicity'. Among these problems are magnetic saturation, Faraday rotation, low spectral resolution, and the method of resolving the ambiguity in the azimuth.

  9. Preliminary Geologic Map of the Big Pine Mountain Quadrangle, California

    USGS Publications Warehouse

    Vedder, J.G.; McLean, Hugh; Stanley, R.G.

    1995-01-01

    Reconnaissance geologic mapping of the San Rafael Primitive Area (now the San Rafael Wilderness) by Gower and others (1966) and Vedder and others (1967) showed a number of stratigraphic and structural ambiguities. To help resolve some of those problems, additional field work was done on parts of the Big Pine Mountain quadrangle during short intervals in 1981, 1984, and 1990-1994.

  10. The Review on the Charge Distribution on the Conductor Surface

    ERIC Educational Resources Information Center

    Matehkolaee, M. Jafari; Asrami, A. Naderi

    2013-01-01

    In this paper we present a full review of the surface charge density at disordered conductor surfaces. Reading textbooks alone does not resolve the ambiguities in this field; as far as possible, we have tried to make the concepts easier to grasp. In fact we will answer two questions. One of them is why charges tend to go where the curvature is…

  11. Neural Network and Letter Recognition.

    NASA Astrophysics Data System (ADS)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components: a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some rotational tolerance in the system. The C-layer then compresses data through the 'Gabor' transform. Pattern-dependent choice of the centers and wavelengths of the 'Gabor' filters gives the system its shift and scale tolerance. Three different learning schemes were investigated in the recognition unit: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and a perceptron as the recognition unit of the ambiguity resolver. Handwriting sets from one hundred different people were collected. Some of these are used as training sets and the remainder as test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the three learning algorithms. 
The minimum error rate, 4.9%, is achieved for alphanumeric sets when 50 sets are trained. With the ambiguity resolver, it is reduced to 2.5%. When only numeral sets are trained and tested, a 2.0% error rate is achieved. When only alphabet sets are considered, the error rate is reduced to 1.1%.
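
    The perceptron learning rule used as one of the three recognition units can be sketched on a toy linearly separable problem (the thesis's character data and network sizes are not reproduced here):

    ```python
    import numpy as np

    # Minimal classic perceptron rule for labels y in {-1, +1},
    # demonstrated on the AND function, which is linearly separable.

    def train_perceptron(X, y, epochs=100, lr=1.0):
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:   # misclassified: update
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([-1., -1., -1., 1.])        # AND truth table
    w, b = train_perceptron(X, y)
    pred = np.sign(X @ w + b)
    ```

    By the perceptron convergence theorem, the number of updates on separable data is bounded, so the loop above terminates with all four points strictly classified.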

  12. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by at most a factor of 4 to 6, depending on the particular dataset, down to 8 nm at best, but at the cost of a reduction in the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090
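
    The flavor of iterative deconvolution can be shown with a 1-D Richardson-Lucy sketch; note the STEM work above used blind deconvolution with two separately estimated PSFs, which this minimal non-blind example does not reproduce:

    ```python
    import numpy as np

    # 1-D Richardson-Lucy iteration with a known, normalized PSF;
    # the "nanoparticle" signal and Gaussian PSF are synthetic.

    def richardson_lucy(blurred, psf, n_iter=100):
        psf = psf / psf.sum()
        psf_flip = psf[::-1]
        est = np.full_like(blurred, blurred.mean())   # flat positive start
        for _ in range(n_iter):
            conv = np.convolve(est, psf, mode="same")
            ratio = blurred / np.maximum(conv, 1e-12)
            est = est * np.convolve(ratio, psf_flip, mode="same")
        return est

    x = np.zeros(64)
    x[30] = 1.0                                        # point-like particle
    psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
    blurred = np.convolve(x, psf / psf.sum(), mode="same")
    restored = richardson_lucy(blurred, psf)
    ```

    As in the paper, more iterations sharpen the estimate toward the point source, at the cost of shrinking apparent feature sizes.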

  13. Fixed point theorems of GPS carrier phase ambiguity resolution and their application to massive network processing: Ambizap

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey

    2008-12-01

    Precise point positioning (PPP) has become popular for Global Positioning System (GPS) geodetic network analysis because for n stations, PPP has O(n) processing time, yet solutions closely approximate those of O(n³) full network analysis. Subsequent carrier phase ambiguity resolution (AR) further improves PPP precision and accuracy; however, full-network bootstrapping AR algorithms are O(n⁴), limiting single network solutions to n < 100. In this contribution, fixed point theorems of AR are derived and then used to develop "Ambizap," an O(n) algorithm designed to give results that closely approximate full network AR. Ambizap has been tested to n ≈ 2800 and proves to be O(n) in this range, adding only ˜50% to PPP processing time. Tests show that a 98-station network is resolved on a 3-GHz CPU in 7 min, versus 22 h using O(n⁴) AR methods. Ambizap features a novel network adjustment filter, producing solutions that precisely match O(n⁴) full network analysis. The resulting coordinates agree to ≪1 mm with current AR methods, much smaller than the ˜3-mm RMS precision of PPP alone. A 2000-station global network can be ambiguity resolved in ˜2.5 h. Together with PPP, Ambizap enables rapid, multiple reanalysis of large networks (e.g., the ˜1000-station EarthScope Plate Boundary Observatory) and facilitates the addition of extra stations to an existing network solution without the need to reprocess all data. To meet future needs, PPP plus Ambizap is designed to handle ˜10,000 stations per day on a 3-GHz dual-CPU desktop PC.
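
    Generic integer bootstrapping, the costly AR building block that Ambizap avoids applying network-wide, can be sketched as sequential conditional rounding. This illustrates only the basic AR step, not Ambizap's fixed-point algorithm; the float ambiguities and covariance below are made up:

    ```python
    import numpy as np

    # Toy integer bootstrapping: round ambiguities one at a time,
    # conditioning the remaining float estimates on integers already fixed.

    def bootstrap_fix(float_amb, cov):
        a = np.array(float_amb, dtype=float)
        Q = np.array(cov, dtype=float)
        fixed = np.zeros(len(a), dtype=int)
        for i in range(len(a)):
            fixed[i] = int(np.rint(a[i]))
            resid = a[i] - fixed[i]
            for j in range(i + 1, len(a)):
                # condition remaining floats on the value just fixed
                a[j] -= Q[j, i] / Q[i, i] * resid
        return fixed

    a_float = [3.1, -1.9, 5.05]
    Q = [[0.04, 0.01, 0.00],
         [0.01, 0.04, 0.00],
         [0.00, 0.00, 0.09]]
    z = bootstrap_fix(a_float, Q)   # -> [3, -2, 5]
    ```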

  14. Single-Receiver GPS Phase Bias Resolution

    NASA Technical Reports Server (NTRS)

    Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.

    2010-01-01

    Existing software has been modified to yield the benefits of integer-fixed double-differenced GPS phase ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.
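
    Per-arc wide-lane averages of the kind saved in the first step are typically formed from the Melbourne-Wübbena combination of dual-frequency phase and code. A sketch with noise-free, self-consistent synthetic observations (the actual Longarc/Gipsy-Oasis processing is not reproduced):

    ```python
    # Melbourne-Wubbena wide-lane combination; observation values synthetic.

    C = 299792458.0                         # speed of light, m/s
    F1, F2 = 1575.42e6, 1227.60e6           # GPS L1/L2 frequencies, Hz
    LAM_WL = C / (F1 - F2)                  # wide-lane wavelength, ~0.86 m

    def widelane_ambiguity(phi1, phi2, p1, p2):
        """phi1, phi2 carrier phases in cycles; p1, p2 pseudoranges in
        meters. Returns the float wide-lane ambiguity in cycles."""
        code_nl = (F1 * p1 + F2 * p2) / (F1 + F2)   # narrow-lane code, m
        return (phi1 - phi2) - code_nl / LAM_WL

    # Noise-free observation with true wide-lane ambiguity 12 - 5 = 7
    rho = 21_000_000.0                      # geometric range, m
    phi1 = rho * F1 / C + 12                # L1 phase, cycles
    phi2 = rho * F2 / C + 5                 # L2 phase, cycles
    n_wl = widelane_ambiguity(phi1, phi2, rho, rho)
    ```

    In practice this combination is averaged over an arc precisely because the code noise is large compared to the ~86 cm wide-lane wavelength.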

  15. The importance of role sending in the sensemaking of change agent roles.

    PubMed

    Tucker, Danielle A; Hendy, Jane; Barlow, James

    2015-01-01

    The purpose of this paper is to investigate what happens when a lack of role-sending results in ambiguous change agent roles during a large scale organisational reconfiguration. The authors consider the role of sensemaking in resolving role ambiguity of middle manager change agents and the consequences of this for organisational restructuring. Data were collected from a case study analysis of significant organisational reconfiguration across a local National Health Service Trust in the UK. The data consist of 82 interviews, complemented by analysis of over 100 documents and field notes from 51 hours of observations collected over five phases covering a three year period before, during and after the reconfiguration. An inductive qualitative analysis revealed the sensemaking processes by which ambiguity in role definition was resolved. The data explain how change agents collectively make sense of a role in their own way, drawing on their own experiences and views as well as cues from other organisational members. The authors also identified the organisational outcomes which resulted from this freedom in sensemaking. This study demonstrates that by leaving too much flexibility in the definition of the role, agents developed their own sensemaking which was subsequently very difficult to manipulate. In creating new roles, management first needs to have a realistic vision of the task and roles that their agents will perform, and second, to communicate these expectations both to those responsible for recruiting for these roles and to the agents themselves. Much of the focus in sensemaking research has been on the importance of change agents' sensemaking of the change, but there has been little focus on how change agents make sense of their own role in the change.

  16. The room temperature crystal structure of a bacterial phytochrome determined by serial femtosecond crystallography

    DOE PAGES

    Edlund, Petra; Takala, Heikki; Claesson, Elin; ...

    2016-10-19

    Phytochromes are a family of photoreceptors that control light responses of plants, fungi and bacteria. A sequence of structural changes, which is not yet fully understood, leads to activation of an output domain. Time-resolved serial femtosecond crystallography (SFX) can potentially shine light on these conformational changes. Here we report the room temperature crystal structure of the chromophore-binding domains of the Deinococcus radiodurans phytochrome at 2.1 Å resolution. The structure was obtained by serial femtosecond X-ray crystallography from microcrystals at an X-ray free electron laser. We find overall good agreement compared to a crystal structure at 1.35 Å resolution derived from conventional crystallography at cryogenic temperatures, which we also report here. The thioether linkage between chromophore and protein is subject to positional ambiguity at the synchrotron, but is fully resolved with SFX. As a result, the study paves the way for time-resolved structural investigations of the phytochrome photocycle with time-resolved SFX.

  18. Quasi-Speckle Measurements of Close Double Stars With a CCD Camera

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2017-01-01

    CCD measurements of visual double stars have been an active area of amateur observing for several years now. However, most CCD measurements rely on “lucky imaging” (selecting a very small percentage of the best frames of a larger frame set so as to get the best “frozen” atmosphere for the image), a technique that has limitations with regard to how close the stars can be and still be cleanly resolved in the lucky image. In this paper, the author reports how using deconvolution stars in the analysis of close double stars can greatly enhance the quality of the autocorrelogram, leading to a more precise solution using speckle reduction software rather than lucky imaging.

  19. Sizing up Asteroids at Lick Observatory with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Drummond, Jack D.; Christou, J.

    2006-12-01

    Using the Shane 3 meter telescope with adaptive optics at Lick Observatory, we have determined the triaxial dimensions and rotational poles of five asteroids, 3 Juno, 4 Vesta, 16 Psyche, 87 Sylvia, and 324 Bamberga. Parametric blind deconvolution was applied to images obtained mostly at 2.5 microns in 2004 and 2006. This is the first time Bamberga’s pole has been determined, and the results for the other four asteroids are in agreement with the analysis of decades of lightcurves by others. The techniques developed here to find sizes, shapes, and poles, in only one or two nights, can be applied to smaller asteroids that are resolved with larger telescopes.

  20. Resolving phase ambiguities in the calibration of redundant interferometric arrays: implications for array design

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy G.; Tarokh, Vahid; Rachlin, Yaron; Shah, Vinay N.; Ashcom, Jonathan B.

    2016-10-01

    We provide new results enabling robust interferometric image reconstruction in the presence of unknown aperture piston variation via the technique of redundant spacing calibration (RSC). The RSC technique uses redundant measurements of the same interferometric baseline with different pairs of apertures to reveal the piston variation among these pairs. In both optical and radio interferometry, the presence of phase-wrapping ambiguities in the measurements is a fundamental issue that needs to be addressed for reliable image reconstruction. In this paper, we show that these ambiguities affect recently developed RSC phasor-based reconstruction approaches operating on the complex visibilities, as well as traditional phase-based approaches operating on their logarithm. We also derive new sufficient conditions for an interferometric array to be immune to these ambiguities in the sense that their effect can be rendered benign in image reconstruction. This property, which we call wrap-invariance, has implications for the reliability of imaging via classical three-baseline phase closures as well as generalized closures. We show that wrap-invariance is conferred upon arrays whose interferometric graph satisfies a certain cycle-free condition. For cases in which this condition is not satisfied, a simple algorithm is provided for identifying those graph cycles which prevent its satisfaction. We apply this algorithm to diagnose and correct a member of a pattern family popular in the literature.
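
    The cycle-free condition is checked on a graph built from the array's redundant baselines. The paper's graph construction is specific to RSC; what follows is only the generic union-find cycle detection that such a diagnostic could build on, with a made-up edge list:

    ```python
    # Union-find pass that flags every edge closing a cycle in a graph.

    def find_cycle_edges(n_nodes, edges):
        """Return every edge that completes a cycle."""
        parent = list(range(n_nodes))
        def root(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        closing = []
        for u, v in edges:
            ru, rv = root(u), root(v)
            if ru == rv:
                closing.append((u, v))          # this edge closes a cycle
            else:
                parent[ru] = rv
        return closing

    # Nodes 0-3 form a 4-cycle; node 4 hangs off as a tree edge
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]
    bad = find_cycle_edges(5, edges)            # -> [(3, 0)]
    ```

    Removing or re-measuring the flagged edges is the kind of correction step the paper applies to arrays that fail the cycle-free condition.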

  1. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muthukumaran, M; Manigandan, D; Murali, V

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution for different volume ionization chambers. Methods: A 0.125cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2×2 cm to 30×30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2×2 cm to 20×20 cm, along both lateral and longitudinal directions. For field sizes from 20×20 cm to 30×30 cm, the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers in the study. The difference in penumbral values between the deconvolved profiles along the lateral and longitudinal directions was on the order of 0.1 to 0.3 mm for all chambers. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvolved profiles. Conclusion: The results of the deconvolved profiles for the 0.125cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.
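
    The penumbra values compared above are conventionally the 80%-20% distance across the profile edge. A sketch on a synthetic sigmoid edge (not the measured chamber data):

    ```python
    import numpy as np

    # 80%-20% penumbra width of a rising profile edge, by linear
    # interpolation on a synthetic sigmoid field edge.

    def penumbra_80_20(x, dose):
        d = dose / dose.max()
        rise = slice(0, int(d.argmax()))          # monotonic rising part
        x20 = np.interp(0.2, d[rise], x[rise])
        x80 = np.interp(0.8, d[rise], x[rise])
        return x80 - x20

    x = np.linspace(-30.0, 0.0, 301)                  # mm, left half-profile
    dose = 1.0 / (1.0 + np.exp(-(x + 15.0) / 2.0))    # sigmoid edge at -15 mm
    p = penumbra_80_20(x, dose)                       # analytic: 4*ln(4) mm
    ```

    Chamber volume averaging widens exactly this quantity, which is what the deconvolution functionality removes.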

  2. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

    In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Second, the convolution matrix is constructed from the estimated results. Third, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
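
    The flavor of sparse deconvolution with a Gaussian echo prototype can be illustrated with plain ISTA on the l1-regularized problem; the SALSA solver and the adaptive echo estimation of the ASD method are not reproduced here, and all signals are synthetic:

    ```python
    import numpy as np

    # l1-regularized deconvolution via ISTA, with a Gaussian echo used as
    # the column prototype of the convolution matrix.

    def ista_deconv(y, h, lam=0.05, n_iter=400):
        n, m = len(y), len(h)
        H = np.zeros((n, n))
        for i in range(n):                 # column i: echo centered at i
            for j in range(m):
                r = i + j - m // 2
                if 0 <= r < n:
                    H[r, i] = h[j]
        L = np.linalg.norm(H, 2) ** 2      # Lipschitz constant of gradient
        x = np.zeros(n)
        for _ in range(n_iter):
            x = x - H.T @ (H @ x - y) / L                       # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold
        return x

    t = np.arange(-10, 11)
    h = np.exp(-0.5 * (t / 2.0) ** 2)      # Gaussian echo prototype
    truth = np.zeros(80)
    truth[30], truth[38] = 1.0, 0.8        # two overlapping echoes
    y = np.convolve(truth, h, mode="same")
    x_hat = ista_deconv(y, h)
    ```

    The sparsity penalty is what separates the two overlapping echoes that a plain inverse filter would smear together.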

  3. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    NASA Astrophysics Data System (ADS)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using sparse deconvolution (the MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it must be implemented on post-stack or pre-stack seismic data from regions of complex structure.

  4. Online Registries for Researchers: Using ORCID and SciENcv.

    PubMed

    Vrabel, Mark

    2016-12-01

    The Open Researcher and Contributor ID (ORCID) registry helps resolve name ambiguity by assigning persistent unique identifiers that automatically link to a researcher's publications, grants, and other activities. This article provides an overview of ORCID and its benefits, citing several examples of its use in cancer and nursing journals. The article also briefly describes My NCBI and the Science Experts Network Curriculum Vitae (SciENcv) and its connection to ORCID.

  5. United States Airline Transport Pilot International Flight Language Experiences, Report 2: Word Meaning and Pronunciation

    DTIC Science & Technology

    2010-04-01

    different countries are understood; (4) Poor radios and transmission quality contribute to the unintelligibility of some controller transmissions; (5...going into a foreign country; (7) Differences associated with U.S. and ICAO phraseology need to be resolved and procedural ambiguities eliminated...affect you most related to differences in the word(s) used to describe a clearance, instruction, advisory, or request? Please list some examples

  6. Multi-Sensor Information Integration and Automatic Understanding

    DTIC Science & Technology

    2008-05-27

    distributions for target tracks and class which are utilized by an active learning cueing management framework to optimally task the appropriate sensor...modality to cued regions of interest. Moreover, this active learning approach also facilitates analyst cueing to help resolve track ambiguities in complex...scenes. We intend to leverage SIG’s active learning with analyst cueing under future efforts with ONR and other DoD agencies. Obtaining long- term

  7. Multi-Sensor Information Integration and Automatic Understanding

    DTIC Science & Technology

    2008-08-27

    distributions for target tracks and class which are utilized by an active learning cueing management framework to optimally task the appropriate sensor modality...to cued regions of interest. Moreover, this active learning approach also facilitates analyst cueing to help resolve track ambiguities in complex...scenes. We intend to leverage SIG’s active learning with analyst cueing under future efforts with ONR and other DoD agencies. Obtaining long- term

  8. Evaluating the Nature of So-Called S*-State Feature in Transient Absorption of Carotenoids in Light-Harvesting Complex 2 (LH2) from Purple Photosynthetic Bacteria.

    PubMed

    Niedzwiedzki, Dariusz M; Hunter, C Neil; Blankenship, Robert E

    2016-11-03

    Carotenoids are a class of natural pigments present in all phototrophic organisms, mainly in their light-harvesting proteins in which they play roles of accessory light absorbers and photoprotectors. Extensive time-resolved spectroscopic studies of these pigments have revealed unexpectedly complex photophysical properties, particularly for carotenoids in light-harvesting LH2 complexes from purple bacteria. An ambiguous, optically forbidden electronic excited state designated as S* has been postulated to be involved in carotenoid excitation relaxation and in an alternative carotenoid-to-bacteriochlorophyll energy transfer pathway, as well as being a precursor of the carotenoid triplet state. However, no definitive and satisfactory origin of the carotenoid S* state in these complexes has been established, despite a wide-ranging series of studies. Here, we resolve the ambiguous origin of the carotenoid S* state in the LH2 complex from Rba. sphaeroides by showing that the S* feature can be seen as a combination of ground-state absorption bleaching of the carotenoid pool converted to cations and the Stark spectrum of neighboring neutral carotenoids, induced by the transient electric field arising from the carotenoid cation-bacteriochlorophyll anion pair. These findings remove the need to assign an S* state, and thereby significantly simplify the photochemistry of carotenoids in these photosynthetic antenna complexes.

  9. What do you gain from deconvolution? - Observing faint galaxies with the Hubble Space Telescope Wide Field Camera

    NASA Technical Reports Server (NTRS)

    Schade, David J.; Elson, Rebecca A. W.

    1993-01-01

    We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.

  10. Post-processing of adaptive optics images based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Changhui; Wei, Kai

    2008-07-01

    Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series using a frame-selection technique, and multi-frame blind deconvolution is then applied. No a priori knowledge is required beyond a positivity constraint, and the use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
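    The record does not say which frame-selection criterion was used; a common choice is to rank short-exposure frames by a sharpness metric such as gradient energy and keep the best fraction, as in this hypothetical sketch:

```python
import numpy as np

def sharpness(img):
    """Mean squared gradient: a simple image-sharpness metric."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def select_frames(frames, keep_fraction=0.5):
    """Keep the sharpest fraction of recorded AO closed-loop frames."""
    order = sorted(range(len(frames)), key=lambda i: sharpness(frames[i]),
                   reverse=True)
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in order[:n_keep]]

# A crisp frame and a turbulence-degraded (smoothed) copy of it.
rng = np.random.default_rng(1)
crisp = rng.random((32, 32))
degraded = (crisp + np.roll(crisp, 1, 0) + np.roll(crisp, 1, 1)
            + np.roll(crisp, (1, 1), (0, 1))) / 4.0
kept = select_frames([degraded, crisp], keep_fraction=0.5)
```

    Only the selected frames would then be passed to the multi-frame blind deconvolution stage.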

  11. Perceptual multistability in figure-ground segregation using motion stimuli.

    PubMed

    Gori, Simone; Giora, Enrico; Pedersini, Riccardo

    2008-11-01

    In a series of experiments using ambiguous stimuli, we investigate the effects of displaying ordered, discrete series of images on the dynamics of figure-ground segregation. For low frame presentation speeds, the series were perceived as a sequence of discontinuous, static images, while for high speeds they were perceived as continuous. We conclude that using stimuli varying continuously along one parameter results in stronger hysteresis and reduces spontaneous switching compared to matched static stimuli with discontinuous parameter changes. The additional evidence that the size of the hysteresis effects depended on trial duration is consistent with the stochastic nature of the dynamics governing figure-ground segregation. The results showed that for continuously changing stimuli, alternative figure-ground organizations are resolved via low-level, dynamical competition. A second series of experiments confirmed these results with an ambiguous stimulus based on Petter's effect.

  12. Fast Integer Ambiguity Resolution for GPS Attitude Determination

    NASA Technical Reports Server (NTRS)

    Lightsey, E. Glenn; Crassidis, John L.; Markley, F. Landis

    1999-01-01

    In this paper, a new algorithm for GPS (Global Positioning System) integer ambiguity resolution is presented. The algorithm first incorporates an instantaneous (static) integer search to significantly reduce the search space using a geometric inequality. Then a batch-type loss function is used to check the remaining integers in order to determine the optimal integer. This batch function represents the GPS sightline vectors in the body frame as the sum of two vectors, one depending on the phase measurements and the other on the unknown integers. The new algorithm has several advantages: it does not require an a priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can resolve the integers even when coplanar baselines exist. The performance of the new algorithm is tested on a dynamic hardware simulator.
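    The two-stage idea (prune with a geometric inequality, then check survivors with a batch loss) can be illustrated on a single baseline. This is a simplified toy, not the paper's algorithm: the pruning inequality |frac + n| <= |b|/lambda follows from the sightlines being unit vectors, and the batch check here is a plain least-squares residual:

```python
import numpy as np
from itertools import product

def resolve_integers(frac_phase, sightlines, baseline_len, wavelength):
    """Toy integer-ambiguity search for a single GPS baseline.
    Step 1 prunes candidates with the geometric inequality
    |frac_phase + n| <= baseline_len / wavelength; step 2 checks the
    survivors with a least-squares (batch-style) loss."""
    max_cyc = baseline_len / wavelength
    candidates = [
        [n for n in range(-int(max_cyc) - 1, int(max_cyc) + 2)
         if abs(f + n) <= max_cyc]
        for f in frac_phase
    ]
    S = np.asarray(sightlines, dtype=float)        # unit sightline vectors
    best, best_cost = None, np.inf
    for ns in product(*candidates):
        p = (np.asarray(frac_phase) + np.asarray(ns)) * wavelength
        b, *_ = np.linalg.lstsq(S, p, rcond=None)  # implied baseline vector
        cost = float(np.sum((S @ b - p) ** 2))     # batch-type loss
        if cost < best_cost:
            best, best_cost = ns, cost
    return best

# Synthetic check: known baseline, five satellite sightlines.
lam = 0.1903                                       # GPS L1 wavelength (m)
b_true = np.array([0.5, 0.2, 0.1])
S = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 1], [1, -1, 0]], dtype=float)
S /= np.linalg.norm(S, axis=1, keepdims=True)
total = S @ b_true / lam                           # carrier phase in cycles
n_true = np.round(total).astype(int)
frac = total - n_true                              # measured fractional phase
n_hat = resolve_integers(frac, S, np.linalg.norm(b_true), lam)
```

    With noiseless phases the true integer set drives the residual to zero and is recovered exactly.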

  13. Eye-fixation patterns of high- and low-span young and older adults: down the garden path and back again.

    PubMed

    Kemper, Susan; Crow, Angela; Kemtes, Karen

    2004-03-01

    Young and older adults' eye fixations were monitored as they read sentences with temporary ambiguities such as "The experienced soldiers warned about the dangers conducted the midnight raid." Their fixation patterns were similar except that older adults made many regressions. In a 2nd experiment, high- and low-span older adults were compared with high- and low-span young adults. First-pass fixations were similar, except that low-span readers made many regressions and their total fixation times were longer. High-span readers also used the focus operator "only" (e.g., "Only experienced soldiers warned about the dangers.") to immediately resolve the temporary ambiguities. No age group differences were observed. These results are discussed with reference to theories of the role of working memory in sentence processing.

  14. Novel methods of time-resolved fluorescence data analysis for in-vivo tissue characterization: application to atherosclerosis.

    PubMed

    Jo, J A; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Dorafshar, A; Reil, T; Baker, D; Freischlag, J; Marcu, L

    2004-01-01

    This study investigates the ability of new analytical methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data to characterize tissue in-vivo, such as the composition of atherosclerotic vulnerable plaques. A total of 73 TR-LIFS measurements were taken in-vivo from the aorta of 8 rabbits, and subsequently analyzed using the Laguerre deconvolution technique. The investigated spots were classified as normal aorta, thin or thick lesions, and lesions rich in either collagen or macrophages/foam-cells. Different linear and nonlinear classification algorithms (linear discriminant analysis, stepwise linear discriminant analysis, principal component analysis, and feedforward neural networks) were developed using spectral and TR features (ratios of intensity values and Laguerre expansion coefficients, respectively). Normal intima and thin lesions were discriminated from thick lesions (sensitivity >90%, specificity 100%) using only spectral features. However, both spectral and time-resolved features were necessary to discriminate thick lesions rich in collagen from thick lesions rich in foam cells (sensitivity >85%, specificity >93%), and thin lesions rich in foam cells from normal aorta and thin lesions rich in collagen (sensitivity >85%, specificity >94%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for in-vivo tissue characterization.

  15. Liquid chromatography with diode array detection combined with spectral deconvolution for the analysis of some diterpene esters in Arabica coffee brew.

    PubMed

    Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda

    2015-02-01

    In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected jointly, cafestol and kahweol esters eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength to 290 nm, allowing their quantification. Such an approach was not possible for the cafestol esters, so spectral deconvolution was used to obtain deconvoluted chromatograms. In each of these chromatograms, the four esters were baseline separated, allowing quantification of the eight targeted compounds. Because the kahweol esters could be quantified either from the chromatogram obtained at 290 nm or from the deconvoluted chromatogram, these compounds were used to compare the analytical performance of the two approaches. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
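    The core of spectral deconvolution for coeluting peaks is a linear unmixing of the detector data matrix against known component spectra. The Gaussian spectra and elution profiles below are synthetic stand-ins, not the paper's measured data:

```python
import numpy as np

# Two coeluting compounds with known, distinct UV spectra (rows of S).
wavelengths = np.linspace(220, 320, 101)
s1 = np.exp(-0.5 * ((wavelengths - 250) / 12) ** 2)   # "cafestol-like"
s2 = np.exp(-0.5 * ((wavelengths - 290) / 12) ** 2)   # "kahweol-like"
S = np.vstack([s1, s2])

t = np.linspace(0, 10, 200)
c1 = np.exp(-0.5 * ((t - 5.0) / 0.4) ** 2)            # overlapping elution
c2 = 0.6 * np.exp(-0.5 * ((t - 5.3) / 0.4) ** 2)
D = np.outer(c1, s1) + np.outer(c2, s2)               # DAD data matrix

# Deconvoluted chromatograms: least-squares projection onto known spectra.
C_hat = D @ S.T @ np.linalg.inv(S @ S.T)
```

    Each column of C_hat is a mathematically separated chromatogram for one compound, which can then be integrated for quantification.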

  16. Cognitive Bias in Ambiguity Judgements: Using Computational Models to Dissect the Effects of Mild Mood Manipulation in Humans.

    PubMed

    Iigaya, Kiyohito; Jolivald, Aurelie; Jitkrittum, Wittawat; Gilchrist, Iain D; Dayan, Peter; Paul, Elizabeth; Mendl, Michael

    2016-01-01

    Positive and negative moods can be treated as prior expectations over future delivery of rewards and punishments. This provides an inferential foundation for the cognitive (judgement) bias task, now widely-used for assessing affective states in non-human animals. In the task, information about affect is extracted from the optimistic or pessimistic manner in which participants resolve ambiguities in sensory input. Here, we report a novel variant of the task aimed at dissecting the effects of affect manipulations on perceptual and value computations for decision-making under ambiguity in humans. Participants were instructed to judge which way a Gabor patch (250ms presentation) was leaning. If the stimulus leant one way (e.g. left), pressing the REWard key yielded a monetary WIN whilst pressing the SAFE key failed to acquire the WIN. If it leant the other way (e.g. right), pressing the SAFE key avoided a LOSS whilst pressing the REWard key incurred the LOSS. The size (0-100 UK pence) of the offered WIN and threatened LOSS, and the ambiguity of the stimulus (vertical being completely ambiguous) were varied on a trial-by-trial basis, allowing us to investigate how decisions were affected by differing combinations of these factors. Half the subjects performed the task in a 'Pleasantly' decorated room and were given a gift (bag of sweets) prior to starting, whilst the other half were in a bare 'Unpleasant' room and were not given anything. Although these treatments had little effect on self-reported mood, they did lead to differences in decision-making. All subjects were risk averse under ambiguity, consistent with the notion of loss aversion. Analysis using a Bayesian decision model indicated that Unpleasant Room subjects were ('pessimistically') biased towards choosing the SAFE key under ambiguity, but also weighed WINS more heavily than LOSSes compared to Pleasant Room subjects. 
These apparently contradictory findings may be explained by the influence of affect on different processes underlying decision-making, and the task presented here offers opportunities for further dissecting such processes.

  18. The use of x-ray pulsar-based navigation method for interplanetary flight

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Guo, Xingcan; Yang, Yong

    2009-07-01

    As interplanetary missions become increasingly complex, the only mature interplanetary navigation method, based mainly on radiometric tracking by the Deep Space Network, cannot meet the rising demand for autonomous real-time navigation. This paper studies the application to interplanetary flight of a rapidly developing navigation technology, X-ray pulsar-based navigation for spacecraft (XPNAV), and evaluates its performance with a computer simulation. XPNAV is an autonomous real-time navigation method that can provide comprehensive navigation information, including position, velocity, attitude, attitude rate, and time. The paper analyzes the fundamental principles and time transformation of XPNAV, and then discusses Delta-correction XPNAV, which blends the vehicle's trajectory dynamics with the pulse time-of-arrival differences at the nominal and estimated spacecraft locations within an Unscented Kalman Filter (UKF), using the heliocentric transfer orbit of Mars Pathfinder as a background mission. XPNAV faces the intractable problem of integer pulse-phase-cycle ambiguities, similar to that of GPS carrier-phase navigation. This article proposes a novel non-ambiguity assumption approach, based on an analysis of the search-space array method, to resolve pulse-phase-cycle ambiguities between the nominal and estimated positions of the spacecraft. The simulation results show that the search-space array method is computationally intensive and requires long processing times when position errors are large, whereas the non-ambiguity assumption method resolves the ambiguity quickly and reliably. An autonomous real-time integrated navigation system blending XPNAV with the DSN, celestial navigation, inertial navigation, and other techniques is a likely development direction for future interplanetary flight navigation systems.

  19. Applying Semantic-based Probabilistic Context-Free Grammar to Medical Language Processing – A Preliminary Study on Parsing Medication Sentences

    PubMed Central

    Xu, Hua; AbdelRahman, Samir; Lu, Yanxin; Denny, Joshua C.; Doan, Son

    2011-01-01

    Semantic-based sublanguage grammars have been shown to be an efficient method for medical language processing. However, given the complexity of the medical domain, parsers using such grammars inevitably encounter ambiguous sentences, which could be interpreted by different groups of production rules and consequently result in two or more parse trees. One possible solution, which has not been extensively explored previously, is to augment productions in medical sublanguage grammars with probabilities to resolve the ambiguity. In this study, we associated probabilities with production rules in a semantic-based grammar for medication findings and evaluated its performance on reducing parsing ambiguity. Using the existing data set from 2009 i2b2 NLP (Natural Language Processing) challenge for medication extraction, we developed a semantic-based CFG (Context Free Grammar) for parsing medication sentences and manually created a Treebank of 4,564 medication sentences from discharge summaries. Using the Treebank, we derived a semantic-based PCFG (probabilistic Context Free Grammar) for parsing medication sentences. Our evaluation using a 10-fold cross validation showed that the PCFG parser dramatically improved parsing performance when compared to the CFG parser. PMID:21856440
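    The probability-attachment step described above is relative-frequency estimation over the Treebank: P(rule) = count(rule) / count(left-hand side), and the parser then keeps the highest-probability tree among the ambiguous analyses. The medication-style rules below are invented toy examples, not the i2b2 grammar:

```python
from collections import Counter

# Toy treebank: each tree is the list of production rules it uses.
treebank = [
    ["MED -> DRUG DOSE", "DOSE -> NUM UNIT"],
    ["MED -> DRUG DOSE", "DOSE -> NUM UNIT"],
    ["MED -> DRUG FREQ", "FREQ -> NUM PER"],
]

# Relative-frequency estimation: P(rule) = count(rule) / count(lhs).
rule_counts, lhs_counts = Counter(), Counter()
for tree in treebank:
    for rule in tree:
        rule_counts[rule] += 1
        lhs_counts[rule.split(" -> ")[0]] += 1
prob = {r: c / lhs_counts[r.split(" -> ")[0]] for r, c in rule_counts.items()}

def parse_score(rules):
    """Probability of a parse = product of its rule probabilities."""
    p = 1.0
    for r in rules:
        p *= prob.get(r, 0.0)
    return p

# Disambiguation: choose the parse with the higher probability.
parses = [["MED -> DRUG DOSE", "DOSE -> NUM UNIT"],
          ["MED -> DRUG FREQ", "FREQ -> NUM PER"]]
best = max(parses, key=parse_score)
```

    In practice log-probabilities are summed instead of multiplying raw probabilities, to avoid underflow on long sentences.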

  20. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321

  1. Phase imaging using shifted wavefront sensor images.

    PubMed

    Zhang, Zhengyun; Chen, Zhi; Rehman, Shakil; Barbastathis, George

    2014-11-01

    We propose a new approach to the complete retrieval of a coherent field (amplitude and phase) using the same hardware configuration as a Shack-Hartmann sensor but with two modifications: first, we add a transversally shifted measurement to resolve ambiguities in the measured phase; and second, we employ factored form descent (FFD), an inverse algorithm for coherence retrieval, with a hard rank constraint. We verified the proposed approach using both numerical simulations and experiments.

  2. Man-Machine Interface (MMI) Requirements Definition and Design Guidelines

    DTIC Science & Technology

    1981-02-01

    be provided to interrogate the user to resolve any input ambiguities resulting from hardware limitations; see Smith and Goodwin, 1971. Reference...Smith, S. L. and Goodwin, N. C. Alphabetic data entry via the Touch-Tone pad: A comment. Human Factors, 1971, 13(2), 189-190...software designer. Reference: Miller, R. B. Response time in man-computer conversational transactions. In Proceedings of the AFIPS Fall Joint Computer

  3. Physical and Mathematical Questions on Signal Processing in Multibase Phase Direction Finders

    NASA Astrophysics Data System (ADS)

    Denisov, V. P.; Dubinin, D. V.; Meshcheryakov, A. A.

    2018-02-01

    Questions on improving the accuracy of multiple-base phase direction finders by rejecting anomalously large errors in the process of resolving the measurement ambiguities are considered. A physical basis is derived and calculated relationships characterizing the efficiency of the proposed solutions are obtained. Results of a computer simulation of a three-base direction finder are analyzed, along with field measurements of a three-base direction finder along near-ground paths.

  4. Why are angles misperceived?

    PubMed Central

    Nundy, Surajit; Lotto, Beau; Coppola, David; Shimpi, Amita; Purves, Dale

    2000-01-01

    Although it has long been apparent that observers tend to overestimate the magnitude of acute angles and underestimate obtuse ones, there is no consensus about why such distortions are seen. Geometrical modeling combined with psychophysical testing of human subjects indicates that these misperceptions are the result of an empirical strategy that resolves the inherent ambiguity of angular stimuli by generating percepts of the past significance of the stimulus rather than the geometry of its retinal projection. PMID:10805814

  5. Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora

    DTIC Science & Technology

    2001-01-01

    monolingual dictionary-derived list of canonical roots would resolve ambiguity regarding which is the appropriate target. Many of the errors are...system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity...corpora has tended to focus on their use in translation model training for MT rather than on monolingual applications. One exception is bilingual parsing

  6. Optical Survey of the Tumble Rates of Retired GEO Satellites

    DTIC Science & Technology

    2014-09-01

    objects while the sun-satellite-observer geometry was most favorable; typically over a one- to two-hour period, repeated multiple times over the course of...modeling and simulation of the optical characteristics of the satellite can help to resolve ambiguities. This process was validated on spacecraft for...

  7. Gamma-Ray Simulated Spectrum Deconvolution of a LaBr₃ 1-in. x 1-in. Scintillator for Nondestructive ATR Fuel Burnup On-Site Predictions

    DOE PAGES

    Navarro, Jorge; Ring, Terry A.; Nigg, David W.

    2015-03-01

    A deconvolution method for a 1 in. x 1 in. LaBr₃ detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to simulated 1 in. x 1 in. LaBr₃ data, and evaluating the effects of deconvolution on nondestructive determination of ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as from experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance one-isotope source-simulated and fuel-simulated spectra. The final evaluation consisted of measuring the performance of the fuel burnup calibration curve for the convolved and deconvolved cases. The methodology was developed to help design a reliable, high-resolution, rugged, and robust detection system for the ATR fuel canal, capable of collecting high-performance data for model validation and of calculating burnup from experimental scintillator detector data.
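    The MLEM update named in this record has a compact matrix form: each iteration forward-projects the estimate through the response matrix, compares with the measurement, and back-projects the ratio. The Gaussian response and two-line source spectrum below are illustrative, not the ATR detector model:

```python
import numpy as np

def mlem(measured, R, iterations=300):
    """MLEM deconvolution of a measured spectrum given a detector
    response matrix R (R[i, j] = probability that a count from source
    bin j is recorded in measured channel i)."""
    x = np.ones(R.shape[1])
    sens = R.sum(axis=0)                           # per-bin sensitivity
    for _ in range(iterations):
        proj = R @ x                               # forward projection
        x *= (R.T @ (measured / np.maximum(proj, 1e-12))) / sens
    return x

# Toy response: each source bin smears into a Gaussian (sigma = 2 channels).
n = 40
i = np.arange(n)
R = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
R /= R.sum(axis=0, keepdims=True)                  # normalize each column
truth = np.zeros(n); truth[12] = 100.0; truth[28] = 60.0
measured = R @ truth
restored = mlem(measured, R)
```

    A useful property of MLEM with a normalized response is that total counts are conserved at every iteration, which matters for burnup quantification.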

  8. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    NASA Astrophysics Data System (ADS)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
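    The multidimensional scheme generalizes a familiar 1-D principle: dividing spectra cancels the source signature that crosscorrelation retains. This is a minimal 1-D sketch of that contrast (delayed-spike impulse responses and a random source signature are assumed for illustration), not the paper's multidimensional operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
s = rng.normal(size=n)                     # unknown source signature
g = np.zeros(n); g[5] = 1.0                # impulse response to receiver 1
h = np.zeros(n); h[17] = 0.8               # impulse response to receiver 2

Sf, Gf, Hf = np.fft.fft(s), np.fft.fft(g), np.fft.fft(h)
U1, U2 = Gf * Sf, Hf * Sf                  # recorded wavefields (circular)

# Crosscorrelation retrieval: the result still carries |S(f)|^2.
xcorr = np.fft.ifft(U2 * np.conj(U1)).real

# Deconvolution retrieval: the source spectrum cancels out.
eps = 1e-6 * np.max(np.abs(U1)) ** 2       # small stabilization constant
decon = np.fft.ifft(U2 * np.conj(U1) / (np.abs(U1) ** 2 + eps)).real
```

    The deconvolved trace is a clean impulse at the inter-receiver delay with the true relative amplitude, whereas the correlation peak is scaled by the source energy; the multidimensional version replaces the scalar division with a regularized matrix inversion per frequency over many sources and receivers.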

  9. Profiling of Histone Post-Translational Modifications in Mouse Brain with High-Resolution Top-Down Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Mowei; Paša-Tolić, Ljiljana; Stenoien, David L.

    Histones play central roles in most chromosomal functions, and both their basic biology and their roles in disease have been the subject of intense study. Since multiple PTMs along the entire protein sequence are potential regulators of histones, a top-down approach, where intact proteins are analyzed, is ultimately required for complete characterization of proteoforms. However, significant challenges remain for top-down histone analysis, primarily because of deficiencies in separation/resolving power and effective identification algorithms. Here, we used state-of-the-art mass spectrometry and a bioinformatics workflow for targeted data analysis and visualization. The workflow uses ProMex for intact mass deconvolution, MSPathFinder as the search engine, and LcMsSpectator as a data visualization tool. ProMex sums across retention time to maximize sensitivity and accuracy for low-abundance species in MS1 deconvolution. MSPathFinder searches the MS2 data against protein sequence databases with user-defined modifications. LcMsSpectator presents the results from ProMex and MSPathFinder in a format that allows quick manual evaluation of critical attributes for high-confidence identifications. When complemented with the open-modification tool TopPIC, this workflow enabled identification of novel histone PTMs, including tyrosine bromination on histones H4 and H2A and H3 glutathionylation, and mapping of conventional PTMs along the entire protein for many histone subunits.

  10. Terahertz imaging for subsurface investigation of art paintings

    NASA Astrophysics Data System (ADS)

    Locquet, A.; Dong, J.; Melis, M.; Citrin, D. S.

    2017-08-01

    Terahertz (THz) reflective imaging is applied to the stratigraphic and subsurface investigation of oil paintings, with a focus on the mid-20th century Italian painting, `After Fishing', by Ausonio Tanda. THz frequency-wavelet domain deconvolution, which is an enhanced deconvolution technique combining frequency-domain filtering and stationary wavelet shrinkage, is utilized to resolve the optically thin paint layers or brush strokes. Based on the deconvolved terahertz data, the stratigraphy of the painting including the paint layers is reconstructed and subsurface features are clearly revealed. Specifically, THz C-scans and B-scans are analyzed based on different types of deconvolved signals to investigate the subsurface features of the painting, including the identification of regions with more than one paint layer, the refractive-index difference between paint layers, and the distribution of the paint-layer thickness. In addition, THz images are compared with X-ray images. The THz image of the thickness distribution of the paint exhibits a high degree of correlation with the X-ray transmission image, but THz images also reveal defects in the paperboard that cannot be identified in the X-ray image. Therefore, our results demonstrate that THz imaging can be considered as an effective tool for the stratigraphic and subsurface investigation of art paintings. They also open up the way for the use of non-ionizing THz imaging as a potential substitute for ionizing X-ray analysis in nondestructive evaluation of art paintings.

  11. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  12. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography).

    PubMed

    Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary

    2015-02-07

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution.

  13. Iterative and function-continuation Fourier deconvolution methods for enhancing mass spectrometer resolution

    NASA Technical Reports Server (NTRS)

    Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.

    1984-01-01

    Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
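    A function-domain iterative technique of the kind described can be sketched with a Van Cittert-style residual iteration; the overlapping-peak signal and Gaussian instrument function below are invented for illustration, and the paper's data-smoothing steps are omitted for brevity:

```python
import numpy as np

def van_cittert(y, psf, iterations=300, alpha=0.5):
    """Function-domain iterative deconvolution (Van Cittert style):
    repeatedly add the residual between the data and the reblurred
    estimate back onto the estimate."""
    psf = psf / psf.sum()
    x = y.astype(float).copy()
    for _ in range(iterations):
        x = x + alpha * (y - np.convolve(x, psf, mode="same"))
    return x

# Two overlapping "mass peaks" blurred by a Gaussian instrument function.
m = np.arange(100, dtype=float)
truth = (10 * np.exp(-0.5 * (m - 45) ** 2)
         + 7 * np.exp(-0.5 * (m - 52) ** 2))
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
y = np.convolve(truth, psf / psf.sum(), mode="same")
x = van_cittert(y, psf)
```

    In the blurred data the second peak appears only as a shoulder; after the iteration it re-emerges as a distinct maximum, which is the resolution enhancement the abstract describes. Without the smoothing steps, this iteration amplifies noise at high frequencies, which is why the paper couples it with data smoothing.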

  14. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    PubMed Central

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
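    The charge-inference step underlying any such algorithm can be illustrated with a toy sketch. This is not the parsimonious algorithm itself; the proton-mass constant, the consecutive-charge-state assumption, and the minimum-spread criterion are simplifications introduced here for illustration:

```python
PROTON = 1.00728  # Da, mass of a proton (positive-mode adduct)

def neutral_mass(mz, z):
    """Neutral mass implied by an m/z value at charge state z."""
    return z * (mz - PROTON)

def assign_charges(peaks, z_range=range(4, 51)):
    """Find the charge assignment under which a series of m/z peaks
    (sorted by ascending m/z, assumed to be consecutive charge states)
    agree most closely on a single neutral mass."""
    best = None
    for z_top in z_range:
        zs = [z_top - i for i in range(len(peaks))]
        if zs[-1] < 1:
            continue
        masses = [neutral_mass(mz, z) for mz, z in zip(peaks, zs)]
        spread = max(masses) - min(masses)
        if best is None or spread < best[0]:
            best = (spread, sum(masses) / len(masses), list(zip(peaks, zs)))
    return best

# Synthetic series for a 20 kDa protein at charge states 13..10.
peaks = [20000.0 / z + PROTON for z in (13, 12, 11, 10)]
spread, mass, assignment = assign_charges(peaks)
```

    A real deconvolution must additionally handle adducts, multimers, and overlapping series, which is exactly where artifact suppression of the kind described above matters.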

  15. Interpretation biases in social anxiety: response generation, response selection, and self-appraisals.

    PubMed

    Huppert, Jonathan D; Pasupuleti, Radhika V; Foa, Edna B; Mathews, Andrew

    2007-07-01

    Cognitive theories propose that the resolution of ambiguity is related to the maintenance of social anxiety. A sentence completion task was used to examine how individuals high (n=26) and low (n=23) in social anxiety resolve ambiguous social sentences. Individuals were asked to generate as many responses as came to mind for each sentence, and then to endorse the response that best completes the sentence. Total responses, first responses, and endorsed responses were examined separately. Results indicated that high anxious individuals had more negative and anxious responses and fewer positive and neutral responses than low anxious individuals on all sentence completion measures. In contrast, a self-report measure of interpretation bias indicated that more negative and anxious appraisals were related to social anxiety, while positive and neutral appraisals were not. Results are discussed in terms of a multi-stage processing model of interpretation biases.

  16. It’s the Thought That Counts

    PubMed Central

    DeWall, C. Nathan; Twenge, Jean M.; Gitter, Seth A.; Baumeister, Roy F.

    2008-01-01

    Prior research has confirmed a causal path between social rejection and aggression, but there has been no clear explanation of why social rejection causes aggression. A series of experiments tested the hypothesis that social exclusion increases the inclination to perceive neutral information as hostile, which has implications for aggression. Compared to accepted and control participants, socially excluded participants were more likely to rate aggressive and ambiguous words as similar (Experiment 1a), to complete word fragments with aggressive words (Experiment 1b), and to rate the ambiguous actions of another person as hostile (Experiments 2-4). This hostile cognitive bias among excluded people was related to their aggressive treatment of others who were not involved in the exclusion experience (Experiments 2 and 3), and others with whom participants had no previous contact (Experiment 4). These findings provide a first step in resolving the mystery of why social exclusion produces aggression. PMID:19210063

  17. Carrier phase ambiguity resolution for the Global Positioning System applied to geodetic baselines up to 2000 km

    NASA Technical Reports Server (NTRS)

    Blewitt, Geoffrey

    1989-01-01

    A technique for resolving the ambiguities in the GPS carrier phase data (which are biased by an integer number of cycles) is described which can be applied to geodetic baselines up to 2000 km in length and can be used with dual-frequency P code receivers. The results of such application demonstrated that a factor of 3 improvement in baseline accuracy could be obtained, giving centimeter-level agreement with coordinates inferred by very-long-baseline interferometry in the western United States. It was found that a method using pseudorange data is more reliable than one using ionospheric constraints for baselines longer than 200 km. It is recommended that future GPS networks have a wide spectrum of baseline lengths (ranging from baselines shorter than 100 km to those longer than 1000 km) and that GPS receivers be used which can acquire dual-frequency P code data.
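    The basic pseudorange-aided ambiguity estimate can be sketched as follows. This is a deliberately simplified single-epoch, single-frequency illustration with invented numbers; real long-baseline resolution averages over many epochs and uses dual-frequency combinations, as the record describes:

```python
import numpy as np

WAVELENGTH_L1 = 0.19029367  # m, GPS L1 carrier wavelength

def resolve_integer_ambiguity(phase_cycles, pseudorange_m, wavelength=WAVELENGTH_L1):
    """Round the float ambiguity implied by a pseudorange-aided phase measurement.

    phase_cycles: carrier phase in cycles (biased by an unknown integer N)
    pseudorange_m: code pseudorange in metres (unambiguous but noisier)
    """
    float_ambiguity = phase_cycles - pseudorange_m / wavelength
    return int(np.round(float_ambiguity))

# Synthetic example: a 20,000 km range with a true integer bias of N = 7 cycles
# and a few centimetres of (optimistically averaged) code noise.
true_range = 2.0e7
phase = true_range / WAVELENGTH_L1 + 7        # noise-free carrier phase, cycles
pseudorange = true_range + 0.02               # 2 cm of residual code error, m
n_hat = resolve_integer_ambiguity(phase, pseudorange)
```

    Single-epoch rounding like this fails once the code noise approaches half a wavelength (about 10 cm), which is why the reliability gain from averaged pseudorange data reported above matters for long baselines.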

  18. On the role of selective attention in visual perception

    PubMed Central

    Luck, Steven J.; Ford, Michelle A.

    1998-01-01

    What is the role of selective attention in visual perception? Before answering this question, it is necessary to differentiate between attentional mechanisms that influence the identification of a stimulus from those that operate after perception is complete. Cognitive neuroscience techniques are particularly well suited to making this distinction because they allow different attentional mechanisms to be isolated in terms of timing and/or neuroanatomy. The present article describes the use of these techniques in differentiating between perceptual and postperceptual attentional mechanisms and then proposes a specific role of attention in visual perception. Specifically, attention is proposed to resolve ambiguities in neural coding that arise when multiple objects are processed simultaneously. Evidence for this hypothesis is provided by two experiments showing that attention—as measured electrophysiologically—is allocated to visual search targets only under conditions that would be expected to lead to ambiguous neural coding. PMID:9448247

  19. Seeing ahead: experience and language in spatial perspective.

    PubMed

    Alloway, Tracy Packiam; Corley, Martin; Ramscar, Michael

    2006-03-01

    Spatial perspective can be directed by various reference frames, as well as by the direction of motion. In the present study, we explored how ambiguity in spatial tasks can be resolved. Participants were presented with virtual reality environments in order to stimulate a spatial reference frame based on motion. They interacted with an ego-moving spatial system in Experiment 1 and an object-moving spatial system in Experiment 2. While interacting with the virtual environment, the participants were presented with either a question representing a motion system different from that of the virtual environment or a nonspatial question relating to physical features of the virtual environment. They then performed the target task: assigning the label "front" in an ambiguous spatial task. The findings indicate that the disambiguation of spatial terms can be influenced by embodied experiences, as represented by the virtual environment, as well as by linguistic context.

  20. Using deconvolution to improve the metrological performance of the grid method

    NASA Astrophysics Data System (ADS)

    Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis

    2013-06-01

    The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
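    Of the techniques compared, the Richardson-Lucy algorithm admits a compact sketch (an illustrative 1-D version; the kernel width, iteration count, and test profile are assumptions, not the paper's data):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Richardson-Lucy deconvolution for a non-negative 1-D signal.

    Multiplicative update that preserves positivity; one of the two
    restoration choices found robust to noise in the study above.
    """
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reconvolved = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reconvolved, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# A step-like strain profile blurred by a Gaussian envelope (the stand-in
# here for the envelope of the phase-extraction kernel).
t = np.arange(-10, 11, dtype=float)
psf = np.exp(-t ** 2 / 8.0)
psf /= psf.sum()
signal = np.zeros(100)
signal[40:60] = 1.0
observed = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

    Note that this plain form assumes Poisson-like independent noise; the spatially correlated noise on strain maps discussed above is precisely what this sketch does not model.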

  1. Least-squares (LS) deconvolution of a series of overlapping cortical auditory evoked potentials: a simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert

    2014-08-01

    Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies 8, 4, 2, 1, 0.5, 0.25 kHz separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
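    The core of LS deconvolution is a linear model whose design matrix encodes the stimulus onsets; a minimal sketch follows (the jittered onset sequence and damped-sinusoid response below are synthetic stand-ins, not the paper's optimized sequences or real CAEPs):

```python
import numpy as np

def ls_deconvolve(eeg, onsets, resp_len):
    """Recover a single evoked response from overlapping presentations
    by least squares.

    eeg: recorded trace containing overlapping responses
    onsets: sample indices of stimulus onsets
    resp_len: assumed length of the single-stimulus response
    """
    n = len(eeg)
    design = np.zeros((n, resp_len))
    for t0 in onsets:
        for j in range(resp_len):
            if t0 + j < n:
                design[t0 + j, j] += 1.0
    response, *_ = np.linalg.lstsq(design, eeg, rcond=None)
    return response

# Overlapping copies of a damped sinusoid at jittered onsets (SOA < resp_len,
# so successive responses overlap heavily).
resp_len = 30
true_resp = np.exp(-np.linspace(0, 3, resp_len)) * np.sin(np.linspace(0, 3 * np.pi, resp_len))
onsets = [0, 13, 29, 41, 57, 70, 88, 101, 119, 130, 148]
eeg = np.zeros(185)
for t0 in onsets:
    eeg[t0:t0 + resp_len] += true_resp
recovered = ls_deconvolve(eeg, onsets, resp_len)
```

    The jitter in the onsets is what keeps the design matrix well conditioned; a strictly periodic sequence would make the columns nearly dependent, which is exactly the condition-number consideration the paper designs around.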

  2. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and Gaussian point spread function are expressed as the same continuous GRBF model, thus image degradation is simplified as convolution of two continuous Gaussian functions, and image deconvolution is converted to calculate the weighted coefficients of two-dimensional control points. Compared with Wiener filter and Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing between control points is adopted to speed up the implementation of the GRBF method. The experiments show that based on the continuous GRBF model, the image deconvolution can be efficiently implemented by the method, which also serves as a useful reference for the study of three-dimensional microscopic image deconvolution.
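    The identity that makes the GRBF model attractive, namely that a Gaussian convolved with a Gaussian is again a Gaussian with summed variances, can be sketched in 1-D (the paper's two-dimensional control-point grid is reduced to a line here, and the widths and weights are invented):

```python
import numpy as np

def gaussian(x, mu, var):
    """Normalized Gaussian basis function."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 10.0, 21)            # control points
true_weights = rng.uniform(0.5, 1.5, centers.size)
var_basis, var_psf = 0.2, 0.3
x = np.linspace(0.0, 10.0, 400)

# Blurring a GRBF image by a Gaussian PSF yields the SAME model with a
# widened basis: var_basis + var_psf.
blurred = sum(w * gaussian(x, c, var_basis + var_psf)
              for w, c in zip(true_weights, centers))

# Deconvolution therefore reduces to re-estimating the control-point weights
# against the widened basis; those weights with the original basis width
# define the restored image.
A = np.array([[gaussian(xi, c, var_basis + var_psf) for c in centers] for xi in x])
weights, *_ = np.linalg.lstsq(A, blurred, rcond=None)
restored = sum(w * gaussian(x, c, var_basis) for w, c in zip(weights, centers))
```

    The weight fit is the expensive step for dense control grids, which is what motivates the GPU multithreading and coarser control-point spacing mentioned above.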

  3. Laboratory Experiments to Simulate and Investigate the Physics Underlying the Dynamics of Merging Solar Corona Structures

    DTIC Science & Technology

    2016-06-05

    have attended and made presentations at the annual APS Division of Plasma Physics Meeting, the bi-annual High Energy Laboratory Astrophysics meeting...the AFOSR Space Science Program Review, the SHINE solar physics meeting, the International Astrophysics Conference, and the workshop “Complex plasma...tor k and Resolving Space-time Ambiguity. GR-Space Physics. submitted. Bellan, P. M., Zhai, X., Chai, K. B., & Ha, B. N. 2015. Complex astrophysical

  4. Resolving Phase Ambiguities in the Calibration of Redundant Interferometric Arrays: Implications for Array Design

    DTIC Science & Technology

    2016-03-04

    summary of the linear algebra involved. As we have seen, the RSC process begins with the interferometric phase measurement β, which due to wrapping will...mentary Divisors) in Section 2 and the following definition of the matrix determinant. This definition is given in many linear algebra texts (see...principle solve for a particular solution of this system by arbitrarily setting two object phases (whose spatial frequencies are not colinear) and one

  5. Evaluating the Nature of So-Called S*-State Feature in Transient Absorption of Carotenoids in Light-Harvesting Complex 2 (LH2) from Purple Photosynthetic Bacteria

    PubMed Central

    2016-01-01

    Carotenoids are a class of natural pigments present in all phototrophic organisms, mainly in their light-harvesting proteins in which they play roles of accessory light absorbers and photoprotectors. Extensive time-resolved spectroscopic studies of these pigments have revealed unexpectedly complex photophysical properties, particularly for carotenoids in light-harvesting LH2 complexes from purple bacteria. An ambiguous, optically forbidden electronic excited state designated as S* has been postulated to be involved in carotenoid excitation relaxation and in an alternative carotenoid-to-bacteriochlorophyll energy transfer pathway, as well as being a precursor of the carotenoid triplet state. However, no definitive and satisfactory origin of the carotenoid S* state in these complexes has been established, despite a wide-ranging series of studies. Here, we resolve the ambiguous origin of the carotenoid S* state in LH2 complex from Rba. sphaeroides by showing that the S* feature can be seen as a combination of ground state absorption bleaching of the carotenoid pool converted to cations and the Stark spectrum of neighbor neutral carotenoids, induced by temporal electric field brought by the carotenoid cation–bacteriochlorophyll anion pair. These findings remove the need to assign an S* state, and thereby significantly simplify the photochemistry of carotenoids in these photosynthetic antenna complexes. PMID:27726397

  6. Evaluating the nature of so-called S*-State feature in transient absorption of carotenoids in light-harvesting complex 2 (LH2) from purple photosynthetic bacteria

    DOE PAGES

    Niedzwiedzki, Dariusz M.; Hunter, C. Neil; Blankenship, Robert E.

    2016-10-11

    Carotenoids are a class of natural pigments present in all phototrophic organisms, mainly in their light-harvesting proteins in which they play roles of accessory light absorbers and photoprotectors. Extensive time-resolved spectroscopic studies of these pigments have revealed unexpectedly complex photophysical properties, particularly for carotenoids in light-harvesting LH2 complexes from purple bacteria. An ambiguous, optically forbidden electronic excited state designated as S* has been postulated to be involved in carotenoid excitation relaxation and in an alternative carotenoid-to-bacteriochlorophyll energy transfer pathway, as well as being a precursor of the carotenoid triplet state. However, no definitive and satisfactory origin of the carotenoid S* state in these complexes has been established, despite a wide-ranging series of studies. Here, we resolve the ambiguous origin of the carotenoid S* state in LH2 complex from Rba. sphaeroides by showing that the S* feature can be seen as a combination of ground state absorption bleaching of the carotenoid pool converted to cations and the Stark spectrum of neighbor neutral carotenoids, induced by temporal electric field brought by the carotenoid cation–bacteriochlorophyll anion pair. Lastly, these findings remove the need to assign an S* state, and thereby significantly simplify the photochemistry of carotenoids in these photosynthetic antenna complexes.

  7. Evaluating the nature of so-called S*-State feature in transient absorption of carotenoids in light-harvesting complex 2 (LH2) from purple photosynthetic bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedzwiedzki, Dariusz M.; Hunter, C. Neil; Blankenship, Robert E.

    Carotenoids are a class of natural pigments present in all phototrophic organisms, mainly in their light-harvesting proteins in which they play roles of accessory light absorbers and photoprotectors. Extensive time-resolved spectroscopic studies of these pigments have revealed unexpectedly complex photophysical properties, particularly for carotenoids in light-harvesting LH2 complexes from purple bacteria. An ambiguous, optically forbidden electronic excited state designated as S* has been postulated to be involved in carotenoid excitation relaxation and in an alternative carotenoid-to-bacteriochlorophyll energy transfer pathway, as well as being a precursor of the carotenoid triplet state. However, no definitive and satisfactory origin of the carotenoid S* state in these complexes has been established, despite a wide-ranging series of studies. Here, we resolve the ambiguous origin of the carotenoid S* state in LH2 complex from Rba. sphaeroides by showing that the S* feature can be seen as a combination of ground state absorption bleaching of the carotenoid pool converted to cations and the Stark spectrum of neighbor neutral carotenoids, induced by temporal electric field brought by the carotenoid cation–bacteriochlorophyll anion pair. Lastly, these findings remove the need to assign an S* state, and thereby significantly simplify the photochemistry of carotenoids in these photosynthetic antenna complexes.

  8. Reduction, analysis, and properties of electric current systems in solar active regions

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen; Demoulin, Pascal

    1995-01-01

    The specific attraction and, in large part, the significance of solar magnetograms lie in the fact that they give the most important data on the electric currents and the nonpotentiality of active regions. Using the vector magnetograms from the Marshall Space Flight Center (MSFC), we employ a unique technique in the area of data analysis for resolving the 180 deg ambiguity in order to calculate the spatial structure of the vertical electric current density. The 180 deg ambiguity is resolved by applying concepts from the nonlinear multivariable optimization theory. The technique is shown to be of particular importance in very nonpotential active regions. The characterization of the vertical electric current density for a set of vector magnetograms using this method then gives the spatial scale, locations, and magnitude of these current systems. The method, which employs an intermediate parametric function which covers the magnetogram and which defines the local `preferred' direction, minimizes a specific functional of the observed transverse magnetic field. The specific functional that is successful is the integral of the square of the vertical current density. We find that the vertical electric current densities have common characteristics for the extended bipolar beta-gamma-delta regions studied. The largest current systems have j(sub z)'s which maximize around 30 mA/sq m and have a linearly decreasing distribution out to a diameter of 30 Mm.

  9. Reduction, Analysis, and Properties of Electric Current Systems in Solar Active Regions

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen; Demoulin, Pascal

    1995-01-01

    The specific attraction and, in large part, the significance of solar vector magnetograms lie in the fact that they give the most important data on the electric currents and the nonpotentiality of active regions. Using the vector magnetograms from the Marshall Space Flight Center (MSFC), we employ a unique technique in the area of data analysis for resolving the 180 degree ambiguity in order to calculate the spatial structure of the vertical electric current density. The 180 degree ambiguity is resolved by applying concepts from the nonlinear multivariable optimization theory. The technique is shown to be of particular importance in very nonpotential active regions. The characterization of the vertical electric current density for a set of vector magnetograms using this method then gives the spatial scale, locations, and magnitude of these current systems. The method, which employs an intermediate parametric function which covers the magnetogram and which defines the local "preferred" direction, minimizes a specific functional of the observed transverse magnetic field. The specific functional that is successful is the integral of the square of the vertical current density. We find that the vertical electric current densities have common characteristics for the extended bipolar beta gamma delta-regions studied. The largest current systems have j(sub z)'s which maximize around 30 mA per square meter and have a linearly decreasing distribution out to a diameter of 30 Mm.

  10. Minimum entropy deconvolution and blind equalisation

    NASA Technical Reports Server (NTRS)

    Satorius, E. H.; Mulligan, J. J.

    1992-01-01

    Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
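    A scale-invariant cost function of the kind referred to above, the varimax-style norm common in the minimum entropy deconvolution literature, can be written down directly (the toy signals below are illustrative):

```python
import numpy as np

def varimax_norm(y):
    """Scale-invariant 'spikiness' objective used in minimum entropy
    deconvolution: V(y) = sum(y_i^4) / (sum(y_i^2))^2.

    Invariant under y -> c*y; blind equalizers maximize objectives of this
    family over the equalizer (or deconvolution filter) taps.
    """
    y = np.asarray(y, dtype=float)
    return np.sum(y ** 4) / np.sum(y ** 2) ** 2

spiky = np.zeros(100)
spiky[50] = 1.0                                          # ideal deconvolved output
blurred = np.convolve(spiky, np.ones(10) / 10, mode="same")  # smeared version
```

    Maximizing such a norm drives the filter output toward spikes, which is why the asymptotic analyses of these cost functions carry over to blind equalization as the record points out.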

  11. Scalar flux modeling in turbulent flames using iterative deconvolution

    NASA Astrophysics Data System (ADS)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  12. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  13. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
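    The SVD-based deconvolution that oSVD refines can be sketched as plain truncated SVD (the oscillation-limiting criterion of oSVD and the gamma-variate fitting of the proposed methods are omitted; the AIF samples and residue function below are synthetic):

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, threshold=0.2):
    """Truncated-SVD deconvolution of a tissue curve by the arterial input.

    Solves tissue = A @ r, where A is the lower-triangular (Toeplitz)
    convolution matrix of the arterial input function (AIF); singular
    values below threshold * s_max are discarded.
    """
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic, noise-free example: exponential residue function.
aif = np.array([1.0, 0.9, 0.6, 0.35, 0.2, 0.1, 0.05, 0.02])
residue = np.exp(-0.4 * np.arange(8))       # true impulse response R(t)
tissue = np.convolve(aif, residue)[:8]
recovered = tsvd_deconvolve(aif, tissue, threshold=1e-10)
```

    With noisy clinical data the threshold must be far larger, and the perfusion parameters (flow, volume, transit time) are then read off the regularized residue function.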

  14. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2010-05-01

    In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it involves in addition a summation along the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of the shortcomings by replacing the correlation process by deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. 
Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multi-dimensional deconvolution and discuss applications of both approaches.
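    The crosscorrelation step at the heart of interferometry can be demonstrated with synthetic noise recordings (the receiver geometry and travel-time delays below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
source = rng.normal(size=4000)   # uncorrelated noise source

# Receivers A and B record the same noise with travel times of 50 and 80
# samples, respectively (circular shifts stand in for propagation delays).
rec_a = np.roll(source, 50)
rec_b = np.roll(source, 80)

# The crosscorrelation of the two recordings peaks at the inter-receiver
# traveltime (30 samples): the response at B as if a source fired at A.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = np.argmax(xcorr) - (len(rec_a) - 1)
```

    This sketch is the trace-by-trace operation whose amplitude and illumination shortcomings motivate the multidimensional deconvolution alternative discussed above.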

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rameau, J. D.; Freutel, S.; Kemper, A. F.

    We report that in complex materials various interactions have important roles in determining electronic properties. Angle-resolved photoelectron spectroscopy (ARPES) is used to study these processes by resolving the complex single-particle self-energy and quantifying how quantum interactions modify bare electronic states. However, ambiguities in the measurement of the real part of the self-energy and an intrinsic inability to disentangle various contributions to the imaginary part of the self-energy can leave the implications of such measurements open to debate. Here we employ a combined theoretical and experimental treatment of femtosecond time-resolved ARPES (tr-ARPES) to show how population dynamics measured using tr-ARPES can be used to separate electron–boson interactions from electron–electron interactions. In conclusion, we demonstrate a quantitative analysis of a well-defined electron–boson interaction in the unoccupied spectrum of the cuprate Bi2Sr2CaCu2O8+x characterized by an excited population decay time that maps directly to a discrete component of the equilibrium self-energy not readily isolated by static ARPES experiments.

  16. A Herschel resolved debris disc around HD 105211

    NASA Astrophysics Data System (ADS)

    Hengst, S.; Marshall, J. P.; Horner, J.; Marsden, S. C.

    2017-07-01

    Debris discs are the dusty aftermath of planet formation processes around main-sequence stars. Analysis of these discs is often hampered by the absence of any meaningful constraint on the location and spatial extent of the disc around its host star. Multi-wavelength, resolved imaging ameliorates the degeneracies inherent in the modelling process, making such data indispensable in the interpretation of these systems. The Herschel Space Observatory observed HD 105211 (η Cru, HIP 59072) with its Photodetector Array Camera and Spectrometer (PACS) instrument in three far-infrared wavebands (70, 100 and 160 μm). Here we combine these data with ancillary photometry spanning optical to far-infrared wavelengths in order to determine the extent of the circumstellar disc. The spectral energy distribution and multi-wavelength resolved emission of the disc are simultaneously modelled using radiative transfer and imaging codes. Analysis of the Herschel/PACS images reveals the presence of extended structure in all three PACS images. From a radiative transfer model we derive a disc extent of 87.0 ± 2.5 au, with an inclination of 70.7 ± 2.2° to the line of sight and a position angle of 30.1 ± 0.5°. Deconvolution of the Herschel images reveals a potential asymmetry, but this remains uncertain, as a combined radiative transfer and image analysis replicates both the structure and the emission of the disc using a single axisymmetric annulus.

  17. Photoemission spectra and band structures of simple metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shung, K.W.; Mahan, G.D.

    1988-08-15

    We present a detailed calculation of the angle-resolved photoemission spectra of Na. The calculation follows a theory by Mahan, which allows for the inclusion of various bulk and surface effects. We find it important to take into account various broadening effects in order to explain the anomalous structure at E_F, which was found by Jensen and Plummer in the spectra of Na. The broadening effects also help to resolve the discrepancy in the conduction-band width. Efforts are made to compare our results with new measurements of Plummer and Lyo. We discuss the ambiguity concerning the sign of the crystal potential and comment on charge-density waves in the systems. We have also generalized our discussion to other simple metals like K.

  18. Commentary on Fiester's "Ill-placed democracy: ethics consultations and the moral status of voting".

    PubMed

    Dubler, Nancy Neveloff

    2011-01-01

    Autumn Fiester identifies an important element in clinical ethics consultation (CEC) that she labels, from the Greek, aporia, a "state of perplexity," evidenced in CEC as ethical ambiguity. Fiester argues that the inherent difficulties of cases so characterized render them inappropriate for voting and more amenable to mediation and the search for consensus. This commentary supports Fiester's analysis and offers further reasons for rejecting voting as a process for resolving disputes in CEC: it distorts the analysis by empowering individual voters' preferences and biases rather than focusing on the interests and wishes of the patient and family; it offers an insufficiently sensitive model for resolving the awesome, nuanced, conflicted, and ethically complex issues surrounding life and death; and it marginalizes minority opinions that may have moral validity.

  19. System using leo satellites for centimeter-level navigation

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor); Parkinson, Bradford W. (Inventor); Cohen, Clark E. (Inventor); Lawrence, David G. (Inventor)

    2002-01-01

    Disclosed herein is a system for rapidly resolving position with centimeter-level accuracy for a mobile or stationary receiver [4]. This is achieved by estimating a set of parameters that are related to the integer cycle ambiguities which arise in tracking the carrier phase of satellite downlinks [5,6]. In the preferred embodiment, the technique involves a navigation receiver [4] simultaneously tracking transmissions [6] from Low Earth Orbit Satellites (LEOS) [2] together with transmissions [5] from GPS navigation satellites [1]. The rapid change in the line-of-sight vectors from the receiver [4] to the LEO signal sources [2], due to the orbital motion of the LEOS, enables the resolution with integrity of the integer cycle ambiguities of the GPS signals [5] as well as parameters related to the integer cycle ambiguity on the LEOS signals [6]. These parameters, once identified, enable real-time centimeter-level positioning of the receiver [4]. In order to achieve high-precision position estimates without the use of specialized electronics such as atomic clocks, the technique accounts for instabilities in the crystal oscillators driving the satellite transmitters, as well as those in the reference [3] and user [4] receivers. In addition, the algorithm accommodates LEOS that receive signals from ground-based transmitters and then re-transmit frequency-converted signals to the ground.

  20. The influence of the immediate visual context on incremental thematic role-assignment: evidence from eye-movements in depicted events.

    PubMed

    Knoeferle, Pia; Crocker, Matthew W; Scheepers, Christoph; Pickering, Martin J

    2005-02-01

    Studies monitoring eye-movements in scenes containing entities have provided robust evidence for incremental reference resolution processes. This paper addresses the less studied question of whether depicted event scenes can affect processes of incremental thematic role-assignment. In Experiments 1 and 2, participants inspected agent-action-patient events while listening to German verb-second sentences with initial structural and role ambiguity. The experiments investigated the time course with which listeners could resolve this ambiguity by relating the verb to the depicted events. Such verb-mediated visual event information allowed early disambiguation on-line, as evidenced by anticipatory eye-movements to the appropriate agent/patient role filler. We replicated this finding while investigating the effects of intonation. Experiment 3 demonstrated that when the verb was sentence-final and thus did not establish early reference to the depicted events, linguistic cues alone enabled disambiguation before people encountered the verb. Our results reveal the on-line influence of depicted events on incremental thematic role-assignment and disambiguation of local structural and role ambiguity. In consequence, our findings require a notion of reference that includes actions and events in addition to entities (e.g. Semantics and Cognition, 1983), and argue for a theory of on-line sentence comprehension that exploits a rich inventory of semantic categories.

  1. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
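    For orientation, the minor loop that both algorithms modify can be reduced to the classical single-scale (Högbom-style) iteration. The 1-D sketch below is illustrative only and is not the multiscale or multifrequency algorithm of the paper:

```python
import numpy as np

def minor_loop(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    """Single-scale CLEAN minor loop: repeatedly subtract a scaled, shifted
    PSF at the residual peak and accumulate delta-function components."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    half = len(psf) // 2
    for _ in range(niter):
        p = int(np.argmax(np.abs(residual)))
        if abs(residual[p]) < threshold:
            break
        amp = gain * residual[p]
        model[p] += amp
        lo, hi = max(0, p - half), min(len(residual), p + half + 1)
        residual[lo:hi] -= amp * psf[half - (p - lo): half + (hi - p)]
    return model, residual

# toy dirty image: a point source of amplitude 2.0 at sample 30, seen
# through a Gaussian PSF (all numbers invented)
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
dirty = np.zeros(100)
dirty[20:41] += 2.0 * psf
model, residual = minor_loop(dirty, psf)
```

The multiscale extension replaces the delta components with extended basis functions of several widths; the loop structure above is what the paper's optimizations accelerate.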

  2. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover edges and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method achieves satisfactory restored color images under different blurring conditions.
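    The alternating-minimization step can be sketched in 1-D with ridge (Tikhonov) regularizers standing in for the paper's image- and blur-domain cost functions; the sizes and parameters here are invented:

```python
import numpy as np

def conv_mtx(v, rows, cols):
    """Matrix form of linear convolution with v (rows x cols, zero-padded)."""
    M = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            if 0 <= i - j < len(v):
                M[i, j] = v[i - j]
    return M

def alternating_min(y, nx, nh, lam=1e-3, sweeps=10):
    """Alternate exact ridge solves for the image x and the blur h; each
    step minimizes the shared objective over one block of variables."""
    h = np.zeros(nh); h[0] = 1.0          # start from an identity blur
    x = y[:nx].copy()
    for _ in range(sweeps):
        H = conv_mtx(h, len(y), nx)
        x = np.linalg.solve(H.T @ H + lam * np.eye(nx), H.T @ y)
        X = conv_mtx(x, len(y), nh)
        h = np.linalg.solve(X.T @ X + lam * np.eye(nh), X.T @ y)
    return x, h

def objective(y, x, h, lam):
    r = y - np.convolve(x, h)
    return 0.5 * r @ r + 0.5 * lam * (x @ x + h @ h)

x_true = np.sin(np.linspace(0, np.pi, 40)) ** 2   # smooth "image"
h_true = np.array([0.5, 0.3, 0.2])                # unknown blur
y = np.convolve(x_true, h_true)                   # blurred observation
x_est, h_est = alternating_min(y, 40, 3)
```

Because each half-step exactly minimizes the same objective over one block, the objective is non-increasing across sweeps, which is the basic appeal of the alternating scheme.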

  3. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography)

    PubMed Central

    Siegel, Nisan; Storrie, Brian; Bruce, Marc

    2016-01-01

    FINCH holographic fluorescence microscopy creates high-resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow-disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". The combined system allows for the simultaneous real-time capture of widefield and holographic images, or of confocal and confocal holographic images, for ready comparison of each method on the exact same field of view. Additional GPU-based complex deconvolution processing of the images further enhances resolution. PMID:26839443

  4. Spectroscopic investigation on the energy transfer process in photosynthetic apparatus of cyanobacteria

    NASA Astrophysics Data System (ADS)

    Li, Ye; Wang, Bei; Ai, Xi-Cheng; Zhang, Xing-Kang; Zhao, Jing-Quan; Jiang, Li-Jin

    2004-06-01

    In this work, we employ the cyanobacterium Spirulina platensis and separate its photosynthetic apparatus into phycobilisome (PBS), thylakoid membrane and phycobilisome-thylakoid membrane complex. Steady-state absorption spectra, fluorescence spectra and the corresponding deconvoluted spectra, together with picosecond time-resolved spectra, are used to investigate the energy transfer process in the phycobilisome-thylakoid membrane complex. The steady-state spectra show that chlorophylls of photosystem II are able to transfer excitation energy to the phycobilisome when Chl a molecules are selectively excited. The decomposition of the steady-state spectra further suggests that the uphill energy transfer originates from chlorophylls of photosystem II to the cores of the phycobilisome, whereas the rods and cores of the phycobilisome cannot receive energy from the chlorophylls of photosystem I. The time constant for the back energy transfer process is 18 ps.

  5. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.
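    The greedy option can be illustrated with a minimal orthogonal-matching-pursuit-style recovery of sparse features through a known blurring kernel; the kernel and sizes are invented, and the paper's actual forward model and TV-regularized variant are not reproduced here:

```python
import numpy as np

def greedy_spikes(A, y, k):
    """OMP-style greedy pursuit: pick the column most correlated with the
    residual, re-fit on the growing support, repeat k times."""
    support = []
    x = np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[support] = coef
    return x

# hypothetical forward imaging kernel as a Toeplitz 'same' operator
n = 64
kernel = np.array([0.2, 1.0, 0.2])
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) <= 1:
            A[i, j] = kernel[i - j + 1]

x_true = np.zeros(n); x_true[20] = 1.0; x_true[40] = 0.8
y = A @ x_true                      # noiseless blurred observation
x_rec = greedy_spikes(A, y, 2)
```

With well-separated features and no noise the pursuit recovers both spikes exactly; the convex TV-regularized heuristic trades this simplicity for robustness on cluttered scenes.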

  6. Experimental feasibility of the airborne measurement of absolute oil fluorescence spectral conversion efficiency

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.; Swift, R. N.

    1983-01-01

    Airborne lidar oil spill experiments carried out to determine the practicability of the AOFSCE (absolute oil fluorescence spectral conversion efficiency) computational model are described. The results reveal that the model is suitable over a considerable range of oil film thicknesses provided the fluorescence efficiency of the oil does not approach the minimum detection sensitivity limitations of the lidar system. Separate airborne lidar experiments to demonstrate measurement of the water column Raman conversion efficiency are also conducted to ascertain the ultimate feasibility of converting such relative oil fluorescence to absolute values. Whereas the AOFSCE model is seen as highly promising, further airborne water column Raman conversion efficiency experiments with improved temporal or depth-resolved waveform calibration and software deconvolution techniques are thought necessary for a final determination of suitability.

  7. Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications.

    PubMed

    Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan

    2010-01-01

    It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes.

  8. Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications

    PubMed Central

    Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan

    2010-01-01

    It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes. PMID:19952117

  9. Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)

    1999-01-01

    A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.
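    The patent does not disclose code, but the general idea of adaptively identifying a multipath channel from a known ranging code can be sketched with a plain LMS filter; this is a hypothetical stand-in, not the patented architecture:

```python
import numpy as np

# Sketch: identify a direct path plus a delayed echo from a known
# PRN-like chip sequence, after which the echo tap could be compensated.
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=4000)      # known ranging chips
channel = np.array([1.0, 0.0, 0.45])           # direct path + delayed echo
received = np.convolve(code, channel)[:len(code)]

w = np.zeros(3)                                # adaptive channel estimate
mu = 0.01                                      # LMS step size
for n in range(2, len(code)):
    x = code[n - 2:n + 1][::-1]                # [code[n], code[n-1], code[n-2]]
    err = received[n] - w @ x                  # prediction error
    w += mu * err * x                          # LMS update
print(np.round(w, 2))                          # ≈ [1.  0.  0.45]
```

Once the taps are identified, the direct-path component can be separated from the echo, which is the essence of multipath compensation by channel estimation.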

  10. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) method, as the one that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly, while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.

  11. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
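    The core of the merging idea is that all views share one estimate and each view contributes a multiplicative RL correction; a minimal 1-D sketch with invented PSFs:

```python
import numpy as np

def rl_multiview(images, psfs, niter=50):
    """Richardson-Lucy with several views sharing one estimate: the usual
    multiplicative correction is averaged over the views, so each image
    contributes the resolution information it actually carries."""
    est = np.full_like(images[0], images[0].mean())
    for _ in range(niter):
        corrections = []
        for img, psf in zip(images, psfs):
            blurred = np.convolve(est, psf, mode="same")
            corrections.append(
                np.convolve(img / (blurred + 1e-12), psf[::-1], mode="same"))
        est = est * np.mean(corrections, axis=0)
    return est

def gaussian_psf(sigma, half=8):
    g = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
    return g / g.sum()

# toy object: two point emitters, observed through a wide and a narrow PSF
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
psfs = [gaussian_psf(3.0), gaussian_psf(1.0)]
images = [np.convolve(truth, p, mode="same") for p in psfs]
est = rl_multiview(images, psfs)
```

Averaging the corrections (rather than inverting each view separately) is one simple way to realize the "single shared estimate" idea; the multiplicative form also keeps the estimate non-negative.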

  12. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently to difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
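    The least-squares recovery and the role of the condition number can be sketched as follows (the onset sequence and response template are invented):

```python
import numpy as np

def overlap_matrix(onsets, resp_len, total_len):
    """Convolution matrix mapping one response template to the overlapped
    recording produced by stimuli at the given onset samples."""
    A = np.zeros((total_len, resp_len))
    for t0 in onsets:
        for j in range(resp_len):
            if t0 + j < total_len:
                A[t0 + j, j] += 1.0
    return A

# jittered rapid stimulation: individual responses overlap in the recording
onsets = [0, 7, 18, 26, 41, 49]
resp = np.hanning(15)                    # hypothetical "true" evoked response
A = overlap_matrix(onsets, len(resp), 70)
recording = A @ resp                     # overlapped raw recording

cond = np.linalg.cond(A)                 # a priori bound on noise amplification
recovered, *_ = np.linalg.lstsq(A, recording, rcond=None)
```

In noiseless conditions the least-squares solution is exact; with noise, the condition number of `A`, which depends only on the chosen onset sequence, bounds how much the noise is amplified, which is why the sequence is optimized before the experiment.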

  13. Dense deconvolution net: Multi path fusion and dense deconvolution for high resolution skin lesion segmentation.

    PubMed

    He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan

    2018-01-01

    Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method shows its superiority over state-of-the-art approaches on the public 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the Dense Deconvolution Net, the average time for processing one test image with our proposed framework is 0.253 s.
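    The "deconvolution" layers in such networks are transposed convolutions (learned upsampling); a minimal 1-D numpy version of the underlying operation, not the paper's architecture:

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    """Transposed convolution ('deconvolution' layer): insert stride-1
    zeros between input samples, then convolve. In a network the kernel
    is learned; here it is fixed for illustration."""
    up = np.zeros(len(x) * stride)
    up[::stride] = x
    return np.convolve(up, kernel, mode="full")

out = transposed_conv1d(np.array([1.0, 2.0]), np.array([1.0, 1.0]))
print(out)   # [1. 1. 2. 2. 0.]
```

Stacking such layers lets a segmentation network grow coarse feature maps back to the input resolution, which is what produces the high-resolution output maps described above.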

  14. Resolving phase ambiguities in the calibration of redundant interferometric arrays: implications for array design

    DTIC Science & Technology

    2015-11-30

    matrix determinant. This definition is given in many linear algebra texts (see e.g. Bretscher (2001)). Definition 3.1: Suppose we have an n-by-n...Processing, 2, 767; Blanchard P., Greenaway A., Anderton R., Appleby R., 1996, J. Opt. Soc. Am. A, 13, 1593; Bretscher O., 2001, Linear Algebra with...frequencies are not co-linear) and one piston phase. This particular solution will then differ from the true solution by a phase ramp in the Fourier

  15. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
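    The way differential encoding resolves a constant phase ambiguity can be shown in a few lines: a fixed rotation of the whole received constellation cancels when symbol differences are decoded. QPSK phases are represented here as integers mod 4, purely for illustration:

```python
import numpy as np

def diff_encode(symbols):
    """Differential encoding: transmit cumulative phase steps mod 4."""
    out = [0]                              # arbitrary reference symbol
    for s in symbols:
        out.append((out[-1] + s) % 4)
    return np.array(out)

def diff_decode(received):
    """Decoding uses only symbol-to-symbol differences, so any constant
    rotation of the constellation drops out."""
    return np.diff(received) % 4

data = np.array([0, 3, 1, 2, 2, 0, 1])
tx = diff_encode(data)
rx = (tx + 1) % 4          # carrier recovery locked 90 degrees off
decoded = diff_decode(rx)
print(np.array_equal(decoded, data))   # True
```

Rotationally invariant trellis codes are built so that this trick can be applied without destroying the code's distance properties.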

  16. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of the filters, i.e., the region of the filter coefficient space whose coefficients play a major role in multiple removal. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of the filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method reduces the computational burden effectively while achieving similar accuracy. Additionally, the proposed method better balances multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
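    The FIST-type iteration used to impose the L1 constraint has the following generic shape; this is a 1-D sparse-deconvolution sketch with invented operators, not the paper's multichannel predictive filter:

```python
import numpy as np

def fista_l1(A, y, lam, niter=300):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1: gradient step,
    soft-thresholding, and Nesterov momentum on the auxiliary point z."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(niter):
        g = z - A.T @ (A @ z - y) / L      # gradient step from z
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# sparse reflectivity observed through a short wavelet (full convolution)
n = 50
wavelet = np.array([1.0, -0.5])
A = np.zeros((n + 1, n))
for j in range(n):
    A[j, j] = wavelet[0]
    A[j + 1, j] = wavelet[1]

x_true = np.zeros(n); x_true[10] = 1.0; x_true[30] = -0.7
y = A @ x_true
x_rec = fista_l1(A, y, lam=0.01)
```

The soft-threshold step is what enforces sparsity (the "non-Gaussianity" of primaries in the paper's terms), and the momentum term is what makes FIST faster than IRLS-style reweighting.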

  17. Studying Regional Wave Source Time Functions Using A Massive Automated EGF Deconvolution Procedure

    NASA Astrophysics Data System (ADS)

    Xie, J. "; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green's function (EGF) method can be used for estimating STFs, but it imposes a strict recording condition: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the "spikiness" of the deconvolutions by calculating their "sdc", defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green's functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling from the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
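    The "sdc" spikiness measure is simple enough to state in code; centering the exclusion window on the deconvolution peak is our assumption about the abstract's convention:

```python
import numpy as np

def sdc(deconv, dt=1.0, exclude_s=10.0):
    """Spikiness: peak absolute value divided by the mean absolute
    background, where the background excludes a window of exclude_s
    seconds around the peak (assumed to contain the STF)."""
    peak_i = int(np.argmax(np.abs(deconv)))
    half = int(round(exclude_s / (2.0 * dt)))
    mask = np.ones(len(deconv), dtype=bool)
    mask[max(0, peak_i - half): peak_i + half + 1] = False
    return np.abs(deconv[peak_i]) / np.mean(np.abs(deconv[mask]))

spiky = np.full(200, 0.01); spiky[100] = 1.0   # pulse-like deconvolution
flat = np.full(200, 0.01)                      # featureless deconvolution
print(sdc(spiky), sdc(flat))   # spiky scores ≈ 100, flat scores ≈ 1
```

A pulse-like deconvolution scores far above the sdc ≈ 10 acceptance level used in the screening, while a featureless one scores near 1, which is what makes the criterion usable in a fully automated pass over a million deconvolutions.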

  18. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimationmore » of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step permitted to limit the SNR degradation while preserving the intensity recovery. 
Conclusions: This study demonstrates the feasibility of incorporating Lucy–Richardson deconvolution combined with wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
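The Lucy–Richardson (Richardson–Lucy) update at the heart of such methods can be sketched in a few lines. This is a generic 1-D NumPy illustration, not the authors' list-mode OSEM implementation; the PSF and iteration count are invented for the example:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a 1-D signal."""
    psf = psf / psf.sum()                 # normalize the PSF
    psf_mirror = psf[::-1]                # mirrored PSF for the correction step
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)           # data / current model
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two point sources blurred by a Gaussian PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
t = np.arange(-8, 9)
psf = np.exp(-t**2 / 4.0)
y = np.convolve(x, psf / psf.sum(), mode="same")
restored = richardson_lucy(y, psf)
```

Each iteration re-blurs the current estimate, compares it with the data, and multiplies in the back-projected correction; non-negativity is preserved automatically, which is one reason this algorithm suits emission tomography.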

  19. Is There a Direct Correlation Between Microvascular Wall Structure and k-Trans Values Obtained From Perfusion CT Measurements in Lymphomas?

    PubMed

    Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher

    2018-05-03

This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma, who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min) and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/m²), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and were additionally significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023).
Vascular endothelial growth factor receptor-3 expression (receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-Trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intra-tumoral BVs. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  20. Deconvolution of Stark broadened spectra for multi-point density measurements in a flow Z-pinch

    DOE PAGES

    Vogman, G. V.; Shumlak, U.

    2011-10-13

Stark broadened emission spectra, once separated from other broadening effects, provide a convenient non-perturbing means of making plasma density measurements. A deconvolution technique has been developed to measure plasma densities in the ZaP flow Z-pinch experiment. The ZaP experiment uses sheared flow to mitigate MHD instabilities. The pinches exhibit Stark broadened emission spectra, which are captured at 20 locations using a multi-chord spectroscopic system. Spectra that are time- and chord-integrated are well approximated by a Voigt function. The proposed method simultaneously resolves plasma electron density and ion temperature by deconvolving the spectral Voigt profile into constituent functions: a Gaussian function associated with instrument effects and Doppler broadening by temperature, and a Lorentzian function associated with Stark broadening by electron density. The method uses analytic Fourier transforms of the constituent functions to fit the Voigt profile in the Fourier domain. The method is discussed and compared to a basic least-squares fit. The Fourier transform fitting routine requires fewer fitting parameters and shows promise in being less susceptible to instrumental noise and to contamination from neighboring spectral lines. The method is evaluated and tested using simulated lines and is applied to experimental data for the 229.69 nm C III line from multiple chords to determine plasma density and temperature across the diameter of the pinch. These measurements are used to gain a better understanding of Z-pinch equilibria.
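The Fourier-domain trick rests on the Voigt profile's transform factorizing into a Gaussian factor (instrument plus Doppler broadening) times a decaying exponential (Stark/Lorentzian broadening), so the log-magnitude is linear in k² and |k| and can be fit without nonlinear iteration. A hedged sketch on synthetic data (not the ZaP analysis code; the grid, widths, and number of retained Fourier bins are illustrative):

```python
import numpy as np

def voigt_widths_fourier(profile, dx, n_keep=20):
    """Estimate the Gaussian sigma and Lorentzian gamma of a Voigt line from
    the low-frequency part of its Fourier transform:
    log|V~(k)| = c - (sigma^2/2) k^2 - gamma |k|  (a linear least-squares fit)."""
    F = np.abs(np.fft.rfft(profile))[:n_keep]
    k = 2 * np.pi * np.fft.rfftfreq(len(profile), d=dx)[:n_keep]
    A = np.column_stack([np.ones_like(k), -0.5 * k**2, -np.abs(k)])
    c, sigma2, gamma = np.linalg.lstsq(A, np.log(F), rcond=None)[0]
    return np.sqrt(sigma2), gamma

# Synthetic Voigt profile: Gaussian (sigma = 2) convolved with Lorentzian (gamma = 1).
dx = 0.1
x = np.arange(-500, 500) * dx
gauss = np.exp(-x**2 / (2 * 2.0**2))
lorentz = (1.0 / np.pi) / (x**2 + 1.0**2)
voigt = np.convolve(gauss, lorentz, mode="same") * dx
sigma_est, gamma_est = voigt_widths_fourier(voigt, dx)
```

Because only the low-|k| bins are used and only three linear parameters are fit, this is the "fewer fitting parameters, less susceptible to noise" advantage the abstract describes.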

  2. Data enhancement and analysis through mathematical deconvolution of signals from scientific measuring instruments

    NASA Technical Reports Server (NTRS)

    Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.

    1981-01-01

    Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.
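One of the classic cases described here, removal of a first-order time-constant (RC-type) broadening, can be sketched as a regularized Fourier division; everything below (the time constant, the peak, the regularization constant) is illustrative rather than taken from the paper:

```python
import numpy as np

def deconvolve_time_constant(measured, tau, dt=1.0, reg=1e-3):
    """Undo first-order (RC time-constant) smearing by a Wiener-style
    regularized inverse filter; `reg` limits noise amplification."""
    n = len(measured)
    t = np.arange(n) * dt
    h = np.exp(-t / tau) / tau * dt          # discrete impulse response, area ~ 1
    H = np.fft.rfft(h)
    Y = np.fft.rfft(measured)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(X, n)

# Toy chromatogram-like peak smeared by a tau = 5 sample time constant.
n = 256
true = np.exp(-(np.arange(n) - 60.0) ** 2 / (2 * 3.0 ** 2))
h = np.exp(-np.arange(n) / 5.0) / 5.0
smeared = np.fft.irfft(np.fft.rfft(true) * np.fft.rfft(h), n)
restored = deconvolve_time_constant(smeared, tau=5.0)
```

The smearing both delays and flattens the peak; the regularized division restores its position and height while keeping high-frequency noise amplification bounded.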

  3. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with the problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer artifacts compared with state-of-the-art methods.

  4. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  5. Improved deconvolution of very weak confocal signals.

    PubMed

    Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
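The proposed fix is simply a mild Gaussian blur of each optical section before deconvolution. A sketch of the idea (the σ value and stack size are illustrative, and this stands in for, rather than reproduces, the Huygens pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter_stack(stack, sigma_xy=0.7):
    """Blur each z-section of a (z, y, x) stack in-plane only:
    sigma = (0, s, s) means no smoothing across optical sections."""
    return gaussian_filter(stack.astype(float), sigma=(0, sigma_xy, sigma_xy))

# Very weak signal: Poisson counts with mean 0.5 per voxel.
rng = np.random.default_rng(0)
weak = rng.poisson(0.5, size=(8, 64, 64)).astype(float)
smoothed = prefilter_stack(weak)
```

The prefilter suppresses single-voxel shot noise (reducing the variance the deconvolver would otherwise mistake for structure to erase) while leaving the mean signal essentially unchanged.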

  6. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate their utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value.
This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
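The scatter-correction step applied before deconvolution is the standard triple-energy-window (TEW) estimate: scatter under the photopeak is approximated by a trapezoid built from two narrow flanking windows. A sketch with made-up counts and window widths (the study's actual windows are not restated here):

```python
import numpy as np

def tew_scatter_correct(peak, lower, upper, w_peak, w_lower, w_upper):
    """Standard triple-energy-window scatter correction: the scatter in the
    photopeak window is estimated as a trapezoid whose sides are set by the
    count rates per keV in the two flanking windows."""
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    return np.clip(peak - scatter, 0, None)   # counts cannot go negative

# Illustrative pixel counts and window widths (keV) for a 364 keV photopeak.
peak = np.array([1000.0, 400.0, 50.0])
lower = np.array([300.0, 120.0, 40.0])
upper = np.array([200.0, 80.0, 30.0])
corr = tew_scatter_correct(peak, lower, upper, w_peak=58.0, w_lower=20.0, w_upper=20.0)
```

For the first pixel, scatter = (300/20 + 200/20) × 58/2 = 725 counts, leaving 275; heavily scatter-dominated pixels clip to zero.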

  7. Source Pulse Estimation of Mine Shock by Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Makowski, R.

The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal at the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many deconvolution methods used in prospecting seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based with respect to the propagation conditions for that type of signal. Each blind deconvolution method involves a number of assumptions; reliable results can be expected only if these assumptions are fulfilled. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) the signal emitted by the shock source is a short-term signal; (2) the signal transmitting system (rockmass) constitutes a parallel connection of elementary systems; (3) the elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.

  8. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the MED objective favors deconvolving a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it meets the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses the deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, taken as a function of the period between impulses, can be used to detect faults and study the health of rotating machine elements effectively.
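The central idea, solving for the deconvolution filter in one shot with a periodic impulse train as the target, can be illustrated by a simplified least-squares variant. Note the simplification: this sketch fixes the phase of the target train, whereas MOMEDA proper maximizes a multipoint D-norm and does not assume where the impulses fall; the signal, resonance, and period below are synthetic:

```python
import numpy as np

def direct_train_filter(x, period, L=30):
    """Directly solve (one linear least-squares problem, no iteration) for the
    FIR filter whose output best matches a periodic impulse train."""
    N = len(x)
    # Row l of X is the input delayed by l samples, so (X.T @ f)[k] = (f * x)[k].
    X = np.array([np.concatenate([np.zeros(l), x[:N - l]]) for l in range(L)])
    target = np.zeros(N)
    target[np.arange(period // 2, N, period)] = 1.0
    f, *_ = np.linalg.lstsq(X.T, target, rcond=None)
    return f, X.T @ f

# Synthetic gearbox-like fault: one impulse per "rotation", smeared by a resonance.
rng = np.random.default_rng(1)
n, period = 600, 50
impulses = np.zeros(n)
impulses[np.arange(25, n, period)] = 1.0
resonance = np.exp(-np.arange(40) / 6.0) * np.cos(0.9 * np.arange(40))
vibration = np.convolve(impulses, resonance)[:n] + 0.05 * rng.standard_normal(n)
f, recovered = direct_train_filter(vibration, period)
```

Scanning the assumed period (the "deconvolution spectrum" idea) and scoring the output against the impulse-train model is what localizes the fault frequency.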

  9. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    DTIC Science & Technology

    2010-09-01

[Only indexed fragments of this report are available: front-matter entries for Section 2.1.4, 'Optical Imaging as a Linear and Nonlinear System', and Section 2.1.5, 'Coherence Theory and Laser Light Statistics', the latter serving as background on coherence theory and the spatial coherence of the laser light incident on the detector surface.]

  10. Monte Carlo modeling of time-resolved fluorescence for depth-selective interrogation of layered tissue.

    PubMed

    Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A

    2011-11-01

    Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
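The "multi-exponential iterative deconvolution" step amounts to fitting an IRF-convolved sum of exponentials to the measured decay. A sketch on noiseless synthetic data (the IRF shape, lifetimes, and amplitudes are all illustrative, not values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 25.0, 500)                  # time axis, ns
irf = np.exp(-(t - 2.0) ** 2 / (2 * 0.1 ** 2))   # narrow instrument response
irf /= irf.sum()

def decay_model(t, a1, tau1, a2, tau2):
    """Bi-exponential fluorescence decay convolved with the IRF."""
    decay = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
    return np.convolve(irf, decay)[: len(t)]

true_params = (0.6, 1.0, 0.4, 4.0)               # amplitudes and lifetimes (ns)
measured = decay_model(t, *true_params)
popt, _ = curve_fit(decay_model, t, measured, p0=(0.5, 0.5, 0.5, 5.0),
                    bounds=(1e-3, [2.0, 10.0, 2.0, 10.0]))
```

The fractional signal contribution of component i is conventionally a_i·τ_i / Σ a_j·τ_j, which is the quantity behind the abstract's 40-60% accuracy criterion.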

  11. Faceting for direction-dependent spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.

    2018-04-01

The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.

  12. The effect of wild card designations and rare alleles in forensic DNA database searches.

    PubMed

    Tvedebrink, Torben; Bright, Jo-Anne; Buckleton, John S; Curran, James M; Morling, Niels

    2015-05-01

Forensic DNA databases are powerful tools used for the identification of persons of interest in criminal investigations. Typically, they consist of two parts: (1) a database containing DNA profiles of known individuals and (2) a database of DNA profiles associated with crime scenes. The risk of adventitious or chance matches between crimes and innocent people increases as the number of profiles within a database grows and more data are shared between various forensic DNA databases, e.g. from different jurisdictions. The DNA profiles obtained from crime scenes are often partial because crime samples may be compromised in quantity or quality. When an individual's profile cannot be resolved from a DNA mixture, ambiguity is introduced. A wild card, F, may be used in place of an allele that has dropped out or when an ambiguous profile is resolved from a DNA mixture. Variant alleles that do not correspond to any marker in the allelic ladder, or appear above or below the extent of the allelic ladder range, are assigned the allele designation R for rare allele. R alleles are position specific with respect to the observed/unambiguous allele. The F and R designations are made when the exact genotype has not been determined. They are treated as wild cards for searching, which results in an increased chance of adventitious matches. We investigated the probability of adventitious matches given these two types of wild cards. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
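The effect of a wild card on the chance of an adventitious match can be made concrete at a single locus: an F in place of a dropped-out allele matches any genotype carrying the observed allele, so the non-exclusion probability rises. A toy single-locus sketch assuming Hardy-Weinberg proportions (the allele names and frequencies are invented):

```python
def match_probability(profile, freqs):
    """Probability that a random person matches the profile at one locus.
    'F' is a wild card standing in for a dropped-out allele."""
    a, b = profile
    if 'F' not in profile:
        pa, pb = freqs[a], freqs[b]
        return pa ** 2 if a == b else 2 * pa * pb   # exact genotype match
    pa = freqs[a]                     # (a, F): any genotype carrying allele a
    return pa ** 2 + 2 * pa * (1 - pa)

freqs = {'12': 0.10, '14': 0.25}
exact = match_probability(('12', '14'), freqs)   # 2 * 0.10 * 0.25 = 0.05
wild = match_probability(('12', 'F'), freqs)     # 0.01 + 2 * 0.10 * 0.90 = 0.19
```

Multiplying such per-locus probabilities across loci shows how a few wild cards in a partial profile can inflate the adventitious match rate by orders of magnitude.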

  13. Evaluation of GPS position and attitude determination for automated rendezvous and docking missions. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Diprinzio, Marc D.; Tolson, Robert H.

    1994-01-01

The use of the Global Positioning System for position and attitude determination is evaluated for an automated rendezvous and docking mission. The typical mission scenario involves the chaser docking with the target for resupply or repair purposes, and is divided into three sections. During the homing phase, the chaser utilizes coarse-acquisition pseudorange data to approach the target; guidance laws for this stage are investigated. In the second phase, differential carrier phase positioning is utilized. The chaser must maintain a quasi-constant distance from the target in order to resolve the initial integer ambiguities. Once the ambiguities are determined, the terminal phase is entered, and the rendezvous is completed with continuous carrier phase tracking. Attitude knowledge is maintained in all phases through the use of the carrier phase observable. A Kalman filter is utilized to estimate all states from the noisy measurement data. The effects of selective availability and cycle slips are also investigated.
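The state estimation described here is classic Kalman filtering; a minimal scalar version conveys the predict/update structure (a random-walk range state with invented noise levels, standing in for the thesis's full multi-state filter):

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1.0, x0=0.0, p0=100.0):
    """Minimal scalar Kalman filter for a slowly varying (random-walk) state
    observed through noisy measurements; q and r are illustrative."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p += q                      # predict: state drifts as a random walk
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the measurement innovation
        p *= (1 - k)                # posterior variance
        out.append(x)
    return np.array(out)

# Noisy range-like measurements of a constant 10 m separation.
rng = np.random.default_rng(2)
z = 10.0 + rng.standard_normal(400)
est = kalman_1d(z)
```

The gain k shrinks as the posterior variance falls, so the filter trusts its own prediction more and more, which is what smooths the noisy measurement stream.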

  14. Unambiguous Metabolite Identification in High-Throughput Metabolomics by Hybrid 1H-NMR/ESI-MS1 Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

The invention improves accuracy of metabolite identification by combining direct infusion ESI-MS with one-dimensional 1H-NMR spectroscopy. First, we apply a standard 1H-NMR metabolite identification protocol by matching the chemical shift, J-coupling and intensity information of experimental NMR signals against the NMR signals of standard metabolites in metabolomics reference libraries. This generates a list of candidate metabolites. The list contains both false positive and ambiguous identifications. The software tool (the invention) takes the list of candidate metabolites, generated from NMR-based metabolite identification, and then calculates, for each of the candidate metabolites, the monoisotopic mass-to-charge (m/z) ratios for each commonly observed ion, fragment and adduct feature. These are then used to assign m/z ratios in experimental ESI-MS spectra of the same sample. Detection of the signals of a given metabolite in both NMR and MS spectra resolves the ambiguities and, therefore, significantly improves the confidence of the identification.
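The m/z cross-check can be sketched directly: for each NMR candidate, predict the m/z of a few common positive-mode ESI features and keep the candidate only if one lands on an observed MS peak within a ppm tolerance. The adduct list and peak values below are illustrative (the H⁺ and Na⁺ masses are standard):

```python
def candidate_mz_features(mono_mass):
    """m/z of a few commonly observed positive-mode ESI ions for a candidate."""
    H, Na = 1.007276, 22.989218        # m/z of a proton and a sodium cation
    return {
        '[M+H]+': mono_mass + H,
        '[M+Na]+': mono_mass + Na,
        '[M+2H]2+': (mono_mass + 2 * H) / 2,
    }

def confirm(candidates, ms_peaks, tol_ppm=5.0):
    """Keep NMR candidates whose predicted m/z matches an observed MS peak."""
    confirmed = []
    for name, mass in candidates.items():
        for mz in candidate_mz_features(mass).values():
            if any(abs(p - mz) / mz * 1e6 <= tol_ppm for p in ms_peaks):
                confirmed.append(name)
                break
    return confirmed

# Glucose vs. an ambiguous alternative; the peak list is synthetic.
candidates = {'glucose': 180.06339, 'other': 150.05282}
peaks = [181.07066, 203.05260]      # would-be [M+H]+ and [M+Na]+ of glucose
hits = confirm(candidates, peaks)
```

A candidate seen in both the NMR match list and the MS peak list survives; one seen only by NMR is flagged as unconfirmed, which is how the cross-modality check removes ambiguity.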

  15. Automated matching of corresponding seed images of three simulator radiographs to allow 3D triangulation of implanted seeds.

    PubMed

    Altschuler, M D; Kassaee, A

    1997-02-01

To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an 'NP (nondeterministic polynomial time)-complete' problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use 'local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good 'global' (full permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.
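For a handful of seeds the "global" search over permutations can be written down directly; permutation sampling replaces this exhaustive loop when the seed count makes it intractable. A 2-D toy with 1-D projections standing in for radiographs (illustrative geometry, not the clinical setup):

```python
import numpy as np
from itertools import permutations

def match_seeds(proj, angles):
    """Exhaustive 'global' matching for three views: pick the permutations of
    views 2 and 3 that minimize the total triangulation residual."""
    u = np.array([[np.cos(a), np.sin(a)] for a in angles])  # projection axes
    n = len(proj[0])
    best = (np.inf, None, None)
    for p2 in permutations(range(n)):
        for p3 in permutations(range(n)):
            resid = 0.0
            for i in range(n):
                b = np.array([proj[0][i], proj[1][p2[i]], proj[2][p3[i]]])
                # least-squares 2-D seed position from three 1-D projections
                pt, res, *_ = np.linalg.lstsq(u, b, rcond=None)
                resid += res[0] if res.size else 0.0
            if resid < best[0]:
                best = (resid, p2, p3)
    return best

# Four seeds, three view angles; correspondences scrambled in views 2 and 3.
rng = np.random.default_rng(3)
seeds = rng.uniform(-5, 5, size=(4, 2))
angles = [0.0, np.pi / 3, 2 * np.pi / 3]
u = np.array([[np.cos(a), np.sin(a)] for a in angles])
proj = [seeds @ ua for ua in u]
proj[1] = proj[1][[2, 0, 3, 1]]
proj[2] = proj[2][[1, 3, 0, 2]]
resid, p2, p3 = match_seeds(proj, angles)
```

The correct pair of permutations is the one whose three projections are mutually consistent (near-zero residual); with two views the overdetermination vanishes and many permutations fit equally well, which is the ambiguity the abstract describes.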

  17. Symmetry relations in charmless B{yields}PPP decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gronau, Michael; Rosner, Jonathan L.; Enrico Fermi Institute and Department of Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, Illinois 60637

    2005-11-01

Strangeness-changing decays of B mesons to three-body final states of pions and kaons are studied, assuming that they are dominated by a {delta}I=0 penguin amplitude with flavor structure b{yields}s. Numerous isospin relations for B{yields}K{pi}{pi} and for underlying quasi-two-body decays are compared successfully with experiment, in some cases resolving ambiguities in fitting resonance parameters. The only exception is a somewhat small branching ratio noted in B{sup 0}{yields}K*{sup 0}{pi}{sup 0}, interpreted in terms of destructive interference between a penguin amplitude and an enhanced electroweak penguin contribution. Relations for B decays into three kaons are derived in terms of final states involving K{sub S} or K{sub L}, assuming that {phi}K-subtracted decay amplitudes are symmetric in K and K̄, as has been observed experimentally. Rates due to nonresonant backgrounds are studied using a simple model, which may reduce discrete ambiguities in Dalitz plot analyses.

  18. Under the umbrella

    PubMed Central

    Müller, Kai W.

    2017-01-01

The inclusion of Internet Gaming Disorder as a preliminary diagnosis subsumed in Section III of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) has provoked mixed reactions. On the one hand, it has been appreciated as an important sign stressing the negative health-related impact of that disorder. Likewise, the definition of diagnostic criteria helps scientists and clinicians to refer to mandatory indicators associated with a health problem. On the other hand, it has been objected that this new diagnosis bears the danger of pathologizing normal behaviors that are a feature of healthy recreational activity for many people. However, the existence of diagnostic criteria is meant to avoid this danger, which emphasizes the necessity of criteria that are defined as accurately as possible. In its current version, the DSM criteria display not only strengths but also ambiguities. Both will be discussed, and ideas for resolving those ambiguities will be presented for further research. PMID:28301966

  19. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  20. Interpretation of magnetotelluric measurements over an electrically dispersive one-dimensional earth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patella, D.

    1987-01-01

    Frequency dispersion of the electromagnetic parameters of earth materials has been widely documented in recent years. It is claimed that magnetotellurics (MT) may be significantly affected by dispersion. This paper studies the MT plane-wave interpretative problem for a one-dimensional earth characterized by the presence of dispersive layers. The theoretical properties of the MT field under the dispersion hypothesis, and the main features of the dispersion phenomenon, are synthetically reviewed. The examination of previously published MT curve responses over some models of dispersive earth sections shows that ambiguity can arise when interpreting MT data with no other source of information: it may be almost impossible to distinguish between the response of a dispersive section and an equally probable dispersion-free section. The dispersion magnetotelluric (DMT) method is proposed as a means to resolve the ambiguity. The DMT method is based on the execution, at the same site, of an MT sounding and of an always dispersion-free dc geoelectric deep sounding.

  1. Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects.

    PubMed

    Sidorenko, Pavel; Kfir, Ofer; Shechtman, Yoav; Fleischer, Avner; Eldar, Yonina C; Segev, Mordechai; Cohen, Oren

    2015-09-08

    Phase-retrieval problems of one-dimensional (1D) signals are known to suffer from ambiguity that hampers their recovery from measurements of their Fourier magnitude, even when their support (a region that confines the signal) is known. Here we demonstrate sparsity-based coherent diffraction imaging of 1D objects using extreme-ultraviolet radiation produced from high harmonic generation. Using sparsity as prior information removes the ambiguity in many cases and enhances the resolution beyond the physical limit of the microscope. Our approach may be used in a variety of problems, such as diagnostics of defects in microelectronic chips. Importantly, this is the first demonstration of sparsity-based 1D phase retrieval from actual experiments, hence it paves the way for greatly improving the performance of Fourier-based measurement systems where 1D signals are inherent, such as diagnostics of ultrashort laser pulses, deciphering the complex time-dependent response functions (for example, time-dependent permittivity and permeability) from spectral measurements and vice versa.

  2. 4Pi microscopy deconvolution with a variable point-spread function.

    PubMed

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  3. Improved deconvolution of very weak confocal signals

    PubMed Central

    Day, Kasey J.; La Rivière, Patrick J.; Chandler, Talon; Bindokas, Vytas P.; Ferrier, Nicola J.; Glick, Benjamin S.

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage. PMID:28868135
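    The prefiltering step is simple enough to sketch. Below is a minimal NumPy illustration (the function names are ours, not part of the Huygens software): each optical section is blurred with a small separable Gaussian before it would be handed to the deconvolution package.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_prefilter(section, sigma=1.5):
    """Blur one 2-D optical section with a separable Gaussian before
    it is handed to the deconvolution software."""
    k = gaussian_kernel(sigma)
    # Convolve each row, then each column (separable 2-D blur).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, section)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

# A weak "fluorescent structure" buried in noise.
rng = np.random.default_rng(0)
section = rng.normal(0.0, 1.0, (64, 64))
section[30:34, 30:34] += 5.0
smoothed = gaussian_prefilter(section)
```

    The blur suppresses pixel-scale noise while the extended structure survives, which is the behavior the authors report for their prefilter.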

  4. Improved deconvolution of very weak confocal signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  5. Improved deconvolution of very weak confocal signals

    DOE PAGES

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon; ...

    2017-06-06

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  6. Blind deconvolution post-processing of images corrected by adaptive optics

    NASA Astrophysics Data System (ADS)

    Christou, Julian C.

    1995-08-01

    Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform and varies both spatially and temporally as well as being object dependent. Because of this, the application of a standard linear and non-linear deconvolution algorithms make it difficult to deconvolve out the point spread function. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive optics compensated data where a separate point spread function is not needed.

  7. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    PubMed

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.

  8. Deconvolution of noisy transient signals: a Kalman filtering application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Zicker, J.E.

    The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise-constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
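    As a rough illustration of the filtering idea, here is a toy scalar Kalman filter (not the Schmidt-Kalman design of the report) that recovers a piecewise-constant transient input from noisy output samples, assuming a random-walk model for the input; all names and parameter values are illustrative.

```python
import numpy as np

def kalman_deconvolve(y, g, q, r):
    """Scalar Kalman filter recovering a piecewise-constant input u
    from measurements y_k = g*u_k + v_k; u is modeled as a random walk
    with process variance q, and v_k has measurement variance r."""
    u_hat, p = 0.0, 1.0
    estimates = np.empty(len(y))
    for idx, yk in enumerate(y):
        p = p + q                      # predict: random walk inflates variance
        s = g * p * g + r              # innovation variance
        k_gain = p * g / s             # Kalman gain
        u_hat = u_hat + k_gain * (yk - g * u_hat)
        p = (1.0 - k_gain * g) * p     # update error variance
        estimates[idx] = u_hat
    return estimates

rng = np.random.default_rng(1)
true_u = np.concatenate([np.zeros(50), np.ones(100), np.zeros(50)])  # transient
y = 2.0 * true_u + rng.normal(0.0, 0.2, true_u.size)                 # gain g = 2
est = kalman_deconvolve(y, g=2.0, q=0.05, r=0.04)
```

    Tuning q trades tracking speed against noise rejection, which is the "tunable predictor" aspect mentioned in the abstract.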

  9. Perceptual memory drives learning of retinotopic biases for bistable stimuli.

    PubMed

    Murphy, Aidan P; Leopold, David A; Welchman, Andrew E

    2014-01-01

    The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased toward one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly "train" a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to 40 min, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of 5 min, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain's tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual organization.

  10. A neuronal network model for context-dependence of pitch change perception.

    PubMed

    Huang, Chengcheng; Englitz, Bernhard; Shamma, Shihab; Rinzel, John

    2015-01-01

    Many natural stimuli have perceptual ambiguities that can be cognitively resolved by the surrounding context. In audition, preceding context can bias the perception of speech and non-speech stimuli. Here, we develop a neuronal network model that can account for how context affects the perception of pitch change between a pair of successive complex tones. We focus especially on an ambiguous comparison: listeners experience opposite percepts (either ascending or descending) for an ambiguous tone pair depending on the spectral location of preceding context tones. We developed a recurrent, firing-rate network model, which detects the frequency-change direction of successively played stimuli and successfully accounts for the context-dependent perception demonstrated in behavioral experiments. The model consists of two tonotopically organized, excitatory populations, E up and E down, that respond preferentially to stimuli ascending or descending in pitch, respectively. These preferences are generated by an inhibitory population that provides inhibition asymmetric in frequency to the two populations; context dependence arises from slow facilitation of inhibition. We show that contextual influence depends on the spectral distribution of preceding tones and the tuning width of inhibitory neurons. Further, we demonstrate, using phase-space analysis, how the facilitated inhibition from previous stimuli and the waning inhibition from the just-preceding tone shape the competition between the E up and E down populations. In sum, our model accounts for contextual influences on the pitch change perception of an ambiguous tone pair by introducing a novel decoding strategy based on direction-selective units. The model's network architecture and slow facilitating inhibition emerge as predictions of neuronal mechanisms for these perceptual dynamics. Since the model structure does not depend on the specific stimuli, we show that it generalizes to other contextual effects and stimulus types.

  11. Vector magnetic field and vector current density in and around the δ-spot NOAA 10808†

    NASA Astrophysics Data System (ADS)

    Bommier, Véronique; Landi Degl'Innocenti, Egidio; Schmieder, Brigitte; Gelly, Bernard

    2011-08-01

    The context is that of the so-called "fundamental ambiguity" (also azimuth ambiguity, or 180° ambiguity) in magnetic field vector measurements: two field vectors symmetrical with respect to the line of sight have the same polarimetric signature, so they cannot be discriminated. We propose a method to resolve this ambiguity by applying the "simulated annealing" algorithm to the minimization of the field divergence, added to the absolute value of the longitudinal current, with the line-of-sight derivative of the magnetic field inferred from the interpretation of the Zeeman effect observed by spectropolarimetry in two lines formed at different depths. We find that the line pair Fe I λ 6301.5 and Fe I λ 6302.5 is appropriate for this purpose. We treat the example case of the δ-spot of NOAA 10808 observed on 13 September 2005 between 14:25 and 15:25 UT with the THEMIS telescope. Besides the resolved magnetic field map, the electric current density vector map is also obtained. A strong horizontal current density flow is found surrounding each spot inside its penumbra, associated with a non-zero Lorentz force that is centripetal with respect to the spot center (i.e., oriented towards the spot center). The current wrapping direction is found to depend on the spot polarity: clockwise for the positive polarity, counterclockwise for the negative one. This analysis is made possible thanks to the UNNOFIT2 Milne-Eddington inversion code, in which the usual theory is generalized to the case of a line (Fe I λ 6301.5) that is not a normal Zeeman triplet line (unlike Fe I λ 6302.5).
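    The minimization strategy can be caricatured in a few lines. The sketch below is a toy version (illustrative names; only a horizontal divergence term, not the full functional of the paper) that anneals per-pixel 180° sign flips of the transverse field.

```python
import numpy as np

def divergence_energy(bx, by, signs):
    """Sum of |div B_horizontal| over the map, with each pixel's
    transverse field multiplied by its ±1 disambiguation sign."""
    div = np.gradient(signs * bx, axis=1) + np.gradient(signs * by, axis=0)
    return np.abs(div).sum()

def anneal_azimuth(bx, by, n_steps=20000, t0=1.0, seed=0):
    """Resolve per-pixel 180-degree flips by simulated annealing on
    the divergence energy (a toy stand-in for the full functional)."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=bx.shape)
    e = divergence_energy(bx, by, signs)
    for step in range(n_steps):
        t = t0 * (1 - step / n_steps) + 1e-3    # linear cooling schedule
        i = rng.integers(bx.shape[0])
        j = rng.integers(bx.shape[1])
        signs[i, j] *= -1                       # propose a flip
        e_new = divergence_energy(bx, by, signs)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                           # accept
        else:
            signs[i, j] *= -1                   # reject, undo the flip
    return signs, e

# A smooth transverse field: the physically consistent answer is a
# uniform sign map (up to a global flip).
bx, by = np.ones((8, 8)), np.zeros((8, 8))
signs, e_final = anneal_azimuth(bx, by)
```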

  12. Ultra-fast HPM detectors improve NAD(P)H FLIM

    NASA Astrophysics Data System (ADS)

    Becker, Wolfgang; Wetzker, Cornelia; Benda, Aleš

    2018-02-01

    Metabolic imaging by NAD(P)H FLIM requires the decay functions in the individual pixels to be resolved into the decay components of bound and unbound NAD(P)H. Metabolic information is contained in the lifetime and relative amplitudes of the components. The separation of the decay components and the accuracy of the amplitudes and lifetimes improves substantially by using ultra-fast HPM-100-06 and HPM-100-07 hybrid detectors. The IRF width in combination with the Becker & Hickl SPC-150N and SPC-150NX TCSPC modules is less than 20 ps. An IRF this fast does not interfere with the fluorescence decay. The usual deconvolution process in the data analysis then virtually becomes a simple curve fitting, and the parameters of the NAD(P)H decay components are obtained at unprecedented accuracy.

  13. Phylogenetic resolution and habitat specificity of members of the Photobacterium phosphoreum species group.

    PubMed

    Ast, Jennifer C; Dunlap, Paul V

    2005-10-01

    Substantial ambiguity exists regarding the phylogenetic status of facultatively psychrophilic luminous bacteria identified as Photobacterium phosphoreum, a species thought to be widely distributed in the world's oceans and believed to be the specific bioluminescent light-organ symbiont of several deep-sea fishes. Members of the P. phosphoreum species group include luminous and non-luminous strains identified phenotypically from a variety of different habitats as well as phylogenetically defined lineages that appear to be evolutionarily distinct. To resolve this ambiguity and to begin developing a meaningful knowledge of the geographic distributions, habitats and symbiotic relationships of bacteria in the P. phosphoreum species group, we carried out a multilocus, fine-scale phylogenetic analysis based on sequences of the 16S rRNA, gyrB and luxABFE genes of many newly isolated luminous strains from symbiotic and saprophytic habitats, together with previously isolated luminous and non-luminous strains identified as P. phosphoreum from these and other habitats. Parsimony analysis unambiguously resolved three evolutionarily distinct clades, phosphoreum, iliopiscarium and kishitanii. The tight phylogenetic clustering within these clades and the distinct separation between them indicate that they are different species, P. phosphoreum, Photobacterium iliopiscarium and the newly recognized 'Photobacterium kishitanii'. Previously reported non-luminous strains, which had been identified phenotypically as P. phosphoreum, resolved unambiguously as P. iliopiscarium, and all examined deep-sea fishes (specimens of families Chlorophthalmidae, Macrouridae, Moridae, Trachichthyidae and Acropomatidae) were found to harbour 'P. kishitanii', not P. phosphoreum, in their light organs. This resolution also revealed that 'P. kishitanii' is cosmopolitan in its geographic distribution. Furthermore, the lack of phylogenetic variation within 'P. kishitanii' indicates that this facultatively symbiotic bacterium is not cospeciating with its phylogenetically divergent host fishes. The results of this fine-scale phylogenetic analysis support the emerging view that bacterial species names should designate singular historical entities, i.e. discrete lineages diagnosed by a significant divergence of shared derived nucleotide characters.

  14. Range resolution improvement in passive bistatic radars using nested FM channels and least squares approach

    NASA Astrophysics Data System (ADS)

    Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.

    2015-05-01

    One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem; however, detection performance suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results in an FM-based PBR system are presented.
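    Successive projection onto hyperplanes is the classical Kaczmarz/POCS iteration. Below is a minimal sketch on a toy circular-convolution system (an illustrative setup, not the authors' radar geometry): each row of the system matrix defines one hyperplane, and the estimate is projected onto them cyclically.

```python
import numpy as np

def pocs_deconvolve(H, y, n_sweeps=300):
    """Successive projections onto the hyperplanes {x : H[i]·x = y[i]}
    (Kaczmarz sweeps). Each hyperplane is a closed convex set, so the
    iteration converges for a consistent system."""
    x = np.zeros(H.shape[1])
    for _ in range(n_sweeps):
        for hi, yi in zip(H, y):
            # Orthogonal projection of x onto one hyperplane.
            x = x + (yi - hi @ x) / (hi @ hi) * hi
    return x

# Toy range profile blurred by a circular convolution matrix H.
kernel = np.array([0.2, 0.6, 0.2])
n = 32
H = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(kernel):
        H[i, (i + j - 1) % n] = k
x_true = np.zeros(n)
x_true[10], x_true[20] = 1.0, 0.5   # two point targets
y = H @ x_true                      # blurred (side-lobed) measurement
x_rec = pocs_deconvolve(H, y)
```

    On this consistent toy system the two point targets are recovered sharply, which mirrors the side-lobe suppression the article reports.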

  15. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    NASA Astrophysics Data System (ADS)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with traditional image restoration methods. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, with much better SNR and image resolution. The time consumption of these methods shows no significant increase on a GPU platform.

  16. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

    Parallel detection, which uses the additional information of a pinhole-plane image taken at every excitation scan position, can be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all conditions, and it is therefore expected to be of use for future routine biomedical research.
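    For reference, the pixel-reassignment variants above build on Richardson-Lucy deconvolution, which is compact enough to sketch in 1-D (a generic textbook implementation, not the authors' code):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Richardson-Lucy deconvolution of a 1-D signal with a known,
    shift-invariant PSF (multiplicative updates from the Poisson model)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")   # forward model
        ratio = observed / np.maximum(blurred, 1e-12)       # data ratio
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
x_true = np.zeros(64)
x_true[20], x_true[40] = 1.0, 0.6
observed = np.convolve(x_true, psf, mode="same")   # blurred, noiseless image
restored = richardson_lucy(observed, psf)
```

    The multiplicative update preserves non-negativity and progressively sharpens the blurred peaks toward their true amplitudes.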

  17. Helical filaments of human Dmc1 protein on single-stranded DNA: a cautionary tale

    PubMed Central

    Yu, Xiong; Egelman, Edward H.

    2010-01-01

    Proteins in the RecA/Rad51/RadA family form nucleoprotein filaments on DNA that catalyze a strand exchange reaction as part of homologous genetic recombination. Because of the centrality of this system to many aspects of DNA repair, the generation of genetic diversity, and cancer when this system fails or is not properly regulated, these filaments have been the object of many biochemical and biophysical studies. A recent paper has argued that the human Dmc1 protein, a meiotic homolog of bacterial RecA and human Rad51, forms filaments on single stranded DNA with ∼ 9 subunits per turn in contrast to the filaments formed on double stranded DNA with ∼ 6.4 subunits per turn, and that the stoichiometry of DNA binding is different between these two filaments. We show using scanning transmission electron microscopy (STEM) that the Dmc1 filament formed on single stranded DNA has a mass per unit length expected from ∼ 6.5 subunits per turn. More generally, we show how ambiguities in helical symmetry determination can generate incorrect solutions, and why one sometimes must use other techniques, such as biochemistry, metal shadowing, or STEM to resolve these ambiguities. While three-dimensional reconstruction of helical filaments from EM images is a powerful tool, the intrinsic ambiguities that may be present with limited resolution are not sufficiently appreciated. PMID:20600108

  18. Structural Priming and Frequency Effects Interact in Chinese Sentence Comprehension

    PubMed Central

    Wei, Hang; Dong, Yanping; Boland, Julie E.; Yuan, Fang

    2016-01-01

    Previous research in several European languages has shown that the language processing system is sensitive to both structural frequency and structural priming effects. However, it is currently not clear whether these two types of effects interact during online sentence comprehension, especially for languages that do not have morphological markings. To explore this issue, the present study investigated the possible interplay between structural priming and frequency effects for sentences containing the Chinese ambiguous construction V NP1 de NP2 in a self-paced reading experiment. The sentences were disambiguated to either the more frequent/preferred NP structure or the less frequent VP structure. Each target sentence was preceded by a prime sentence of three possible types: NP primes, VP primes, and neutral primes. When the ambiguous construction V NP1 de NP2 was disambiguated to the dispreferred VP structure, participants experienced more processing difficulty following an NP prime relative to following a VP prime or a neutral baseline. When the ambiguity was resolved to the preferred NP structure, prime type had no effect. These results suggest that structural priming in comprehension is modulated by the baseline frequency of alternative structures, with the less frequent structure being more subject to structural priming effects. These results are discussed in the context of the error-based, implicit learning account of structural priming. PMID:26869954

  19. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    PubMed Central

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  20. Gaze transfer in remote cooperation: is it always helpful to see what your partner is attending to?

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian; Velichkovsky, Boris M

    2013-01-01

    Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether and to what extent the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we let pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better when using either gaze or mouse transfer compared to speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced in delayed responses to the cursor. Also, participants tried to resolve ambiguities by engaging in more verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support.

  1. A predictive software tool for optimal timing in contrast enhanced carotid MR angiography

    NASA Astrophysics Data System (ADS)

    Moghaddam, Abbas N.; Balawi, Tariq; Habibi, Reza; Panknin, Christoph; Laub, Gerhard; Ruehm, Stefan; Finn, J. Paul

    2008-03-01

    A clear understanding of the first-pass dynamics of contrast agents in the vascular system is crucial in synchronizing data acquisition of 3D MR angiography (MRA) with arrival of the contrast bolus in the vessels of interest. We implemented a computational model to simulate contrast dynamics in the vessels using the theory of linear time-invariant systems. The algorithm calculates a patient-specific impulse response for the contrast concentration from time-resolved images following a small test bolus injection. This is performed for a specific region of interest and through deconvolution of the intensity curve using the long division method. Since high spatial resolution 3D MRA is not time-resolved, the method was validated on time-resolved arterial contrast enhancement in multi-slice CT angiography. For 20 patients, the timing of the contrast enhancement of the main bolus was predicted by our algorithm from the response to the test bolus; for each case the predicted time of maximum intensity was compared to the corresponding time in the actual scan, resulting in acceptable agreement. Furthermore, as a qualitative validation, the algorithm's predictions of the timing of the carotid MRA in 20 patients with high-quality MRA were correlated with the actual timing of those studies. We conclude that the above algorithm can be used as a practical clinical tool to eliminate guesswork and to replace empiric formulae with a priori computation of patient-specific timing of data acquisition for MR angiography.
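    Long-division deconvolution amounts to solving the discrete convolution equation recursively for the impulse response. The toy sketch below uses invented curves and illustrative names (the clinical tool operates on measured signal-intensity data): the recovered response is then convolved with a hypothetical main-bolus input to predict the enhancement peak.

```python
import numpy as np

def long_division_deconvolve(y, x, n_taps):
    """Recover the first n_taps samples of the impulse response h from
    y = h * x by polynomial long division:
    h[k] = (y[k] - sum_{j<k} h[j] * x[k-j]) / x[0]."""
    x_pad = np.zeros(n_taps)
    x_pad[:len(x)] = x
    h = np.zeros(n_taps)
    for k in range(n_taps):
        # Subtract contributions of already-known taps, then divide.
        h[k] = (y[k] - np.dot(h[:k], x_pad[k:0:-1])) / x_pad[0]
    return h

x = np.array([1.0, 0.8, 0.4, 0.1])            # test-bolus concentration curve
h_true = np.array([0.0, 0.5, 1.0, 0.6, 0.2])  # impulse response: arrival, peak, washout
y = np.convolve(h_true, x)                     # measured test-bolus enhancement
h_rec = long_division_deconvolve(y, x, n_taps=h_true.size)

# Predict the main-bolus enhancement from the recovered response.
main = np.convolve(h_rec, np.full(3, 3.0))     # hypothetical main injection
```

    On noiseless data the division is exact; with real, noisy curves the division is sensitive to noise in the early input samples, which is why the method is applied to a smoothed region-of-interest curve.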

  2. Kato expansion in quantum canonical perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolaev, Andrey, E-mail: Andrey.Nikolaev@rdtex.ru

    2016-06-15

    This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson’s ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.

  3. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  4. Olivine Composition of the Mars Trojan 5261 Eureka: Spitzer IRS Data

    NASA Technical Reports Server (NTRS)

    Lim, L. F.; Burt, B. J.; Emery, J. P.; Mueller, M.; Rivkin, A. S.; Trilling, D.

    2011-01-01

    The largest Mars trojan, 5261 Eureka, is one of two prototype "Sa" asteroids in the Bus-DeMeo taxonomy. Analysis of its visible/near-IR spectrum led to the conclusion that it might represent either an angritic analog or an olivine-rich composition such as an R chondrite. Spitzer IRS data (5-30 micrometers) have enabled us to resolve this ambiguity. The thermal-IR spectrum exhibits strong olivine reststrahlen features consistent with a composition of approximately Fo60-70. Laboratory spectra of R chondrites, brachinites, and chassignites are dominated by similar features.

  5. Right/left assignment in drift chambers and proportional multiwire chambers (PWC's) using induced signals

    DOEpatents

    Walenta, Albert H.

    1979-01-01

    Improved multiwire chamber having means for resolving the left/right ambiguity in the location of an ionizing event. The chamber includes a plurality of spaced parallel anode wires positioned between spaced planar cathodes. Associated with each of the anode wires are a pair of localizing wires, one positioned on either side of the anode wire. The localizing wires are connected to a differential amplifier whose output polarity is determined by whether the ionizing event occurs to the right or left of the anode wire.

  6. Correlation mapping microscopy

    NASA Astrophysics Data System (ADS)

    McGrath, James; Alexandrov, Sergey; Owens, Peter; Subhash, Hrebesh M.; Leahy, Martin J.

    2015-03-01

    Changes in the microcirculation are associated with conditions such as Raynaud's disease. Current modalities used to assess the microcirculation, such as nailfold capillaroscopy, are limited by their depth ambiguity. A correlation mapping technique was recently developed to extend the capabilities of optical coherence tomography to generate depth-resolved images of the microcirculation. Here we present the extension of this technique to microscopy modalities, including confocal microscopy. It is shown that this correlation mapping microscopy technique can extend the capabilities of conventional microscopy to enable mapping of vascular networks in vivo with high spatial resolution.

  7. Absolute Distance Measurement with the MSTAR Sensor

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian

    2003-01-01

    The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers and making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low-frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beam-launching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.
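
    The integer-cycle ambiguity that MSTAR resolves can be illustrated with the classic two-wavelength (synthetic-wavelength) trick: two wrapped phase measurements at nearby wavelengths yield an unambiguous coarse distance out to half the synthetic wavelength. The wavelengths and distance below are hypothetical, and the sketch ignores MSTAR's modulation sidebands and all noise sources.

```python
import math

# Two hypothetical laser wavelengths (micrometres), chosen for illustration only
lam1, lam2 = 1.550, 1.560
Lambda = lam1 * lam2 / (lam2 - lam1)          # synthetic wavelength, ~241.8 um

def wrapped_phase(d, lam):
    """Round-trip interferometric phase for distance d, wrapped to [0, 2*pi)."""
    return (4.0 * math.pi * d / lam) % (2.0 * math.pi)

def coarse_distance(phi1, phi2):
    """Distance from the wrapped phase difference.
    Unambiguous as long as d < Lambda / 2."""
    dphi = (phi1 - phi2) % (2.0 * math.pi)
    return dphi * Lambda / (4.0 * math.pi)

d_true = 50.0                                  # micrometres, well below Lambda/2
d_est = coarse_distance(wrapped_phase(d_true, lam1),
                        wrapped_phase(d_true, lam2))
```

    A real system chains several synthetic wavelengths so that each coarse estimate selects the integer cycle of the next, finer one.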

  8. Energy dissipation from a correlated system driven out of equilibrium

    DOE PAGES

    Rameau, J. D.; Freutel, S.; Kemper, A. F.; ...

    2016-12-20

    We report that in complex materials various interactions play important roles in determining electronic properties. Angle-resolved photoelectron spectroscopy (ARPES) is used to study these processes by resolving the complex single-particle self-energy and quantifying how quantum interactions modify bare electronic states. However, ambiguities in the measurement of the real part of the self-energy, and an intrinsic inability to disentangle various contributions to the imaginary part of the self-energy, can leave the implications of such measurements open to debate. Here we employ a combined theoretical and experimental treatment of femtosecond time-resolved ARPES (tr-ARPES) to show how population dynamics measured using tr-ARPES can be used to separate electron–boson interactions from electron–electron interactions. In conclusion, we demonstrate a quantitative analysis of a well-defined electron–boson interaction in the unoccupied spectrum of the cuprate Bi2Sr2CaCu2O8+x, characterized by an excited-population decay time that maps directly to a discrete component of the equilibrium self-energy not readily isolated by static ARPES experiments.

  9. Application of deconvolution interferometry with both Hi-net and KiK-net data

    NASA Astrophysics Data System (ADS)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and for deconvolution interferometry using later coda waves.
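
    Deconvolution interferometry is commonly implemented as a regularised spectral division of one station's record by a reference record. A minimal numpy sketch follows; the water-level regularisation shown is one standard stabilisation choice, not necessarily the author's, and the records are synthetic.

```python
import numpy as np

def deconv_interferometry(u, u_ref, water_level=1e-3):
    """Deconvolve record u by reference record u_ref in the frequency
    domain, stabilised with a water level on the reference power spectrum."""
    U = np.fft.rfft(u)
    Ur = np.fft.rfft(u_ref)
    power = np.abs(Ur) ** 2
    denom = power + water_level * power.max()   # fill spectral holes
    return np.fft.irfft(U * np.conj(Ur) / denom, n=len(u))

# Sanity check: deconvolving a record by itself concentrates energy at zero lag
rng = np.random.default_rng(0)
u = rng.standard_normal(512)
d = deconv_interferometry(u, u)
```

    In the KiK-net/Hi-net application, `u` and `u_ref` would be the surface and borehole records, and the deconvolved waveform represents the wave propagating between the two sensors.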

  10. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    NASA Astrophysics Data System (ADS)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations, in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and by light diffraction and scattering in the imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly in the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 and +3.0 km s-1, respectively, at an average geometrical height of roughly 50 km. We find that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as roughly expected from mass balance.

  11. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet, generated in and transmitted through air, as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common-midpoint data and (4) identifying any multiples. Comparing the deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media: increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features, as confirmed by detailed study of the adjacent quarry wall.

  12. Peptide de novo sequencing of mixture tandem mass spectra

    PubMed Central

    Hotta, Stéphanie Yuki Kolbeck; Verano‐Braga, Thiago; Kjeldsen, Frank

    2016-01-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co‐isolation and thus prone to false identifications. The deconvolution approach matched complementary b‐, y‐ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co‐isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20–35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. PMID:27329701
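
    The complementary b/y-ion matching described above can be sketched as follows, assuming singly charged fragments, for which a complementary b/y pair sums to the neutral peptide mass plus two proton masses. The peak values, precursor masses, and tolerance below are invented for illustration, and the function is a simplified stand-in for the paper's deconvolution step.

```python
import itertools

PROTON = 1.00728  # proton mass, Da

def assign_complementary_pairs(peaks, precursor_neutral_masses, tol=0.02):
    """Group fragment peaks into 'virtual spectra', one per co-isolated
    precursor, by finding singly charged b/y pairs whose m/z values sum
    to M + 2*PROTON within the tolerance."""
    virtual = {round(m, 3): [] for m in precursor_neutral_masses}
    for p1, p2 in itertools.combinations(peaks, 2):
        for m in precursor_neutral_masses:
            if abs((p1 + p2) - (m + 2 * PROTON)) <= tol:
                virtual[round(m, 3)].extend([p1, p2])
    return virtual

# Two hypothetical co-isolated peptides (neutral masses 800.40 and 900.45 Da)
# with two complementary fragments each
peaks = [300.16, 502.26, 350.21, 552.26]
spectra = assign_complementary_pairs(peaks, [800.40, 900.45])
```

    Each virtual spectrum can then be submitted separately to a de novo sequencing program, which is the gain the paper quantifies.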

  13. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural-network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works.
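
    A minimal sketch of deconvolution as matrix inversion, comparing a pseudo-inverse solution with an iterative LMS update (used here as a simple stand-in for the report's backpropagation network). The blur kernel and input signal are hypothetical.

```python
import numpy as np

# Hypothetical blur kernel and unknown input signal
kernel = np.array([0.25, 0.5, 0.25])
x_true = np.array([0.0, 1.0, 0.0, 2.0, 1.0, 0.0])

# Build the full convolution matrix A so that y = A @ x
m, n = len(x_true) + len(kernel) - 1, len(x_true)
A = np.zeros((m, n))
for j in range(n):
    A[j:j + len(kernel), j] = kernel
y = A @ x_true                              # blurred observation

# 1) Pseudo-inverse solution
x_pinv = np.linalg.pinv(A) @ y

# 2) LMS: stochastic gradient descent on ||A x - y||^2, one row at a time
x_lms = np.zeros(n)
mu = 0.5                                    # step size (stable for this kernel)
for _ in range(2000):
    for i in range(m):
        err = y[i] - A[i] @ x_lms
        x_lms += mu * err * A[i]
```

    For noise-free, consistent data both routes recover the input; their behaviour diverges once noise makes the inversion ill-conditioned, which is the regime the comparison targets.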

  14. Deconvolution of gas chromatographic data

    NASA Technical Reports Server (NTRS)

    Howard, S.; Rayborn, G. H.

    1980-01-01

    The use of deconvolution methods on gas chromatographic data to obtain an accurate determination of the relative amounts of each material present by mathematically separating the merged peaks is discussed. Data were obtained on a gas chromatograph with a flame ionization detector. Chromatograms of five xylenes with differing degrees of separation were generated by varying the column temperature at selected rates. The merged peaks were then successfully separated by deconvolution. The concept of function continuation in the frequency domain was introduced in striving to reach the theoretical limit of accuracy, but proved to be only partially successful.

  15. Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona

    USGS Publications Warehouse

    Bultman, Mark W.

    2015-01-01

    Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
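
    Euler deconvolution solves Euler's homogeneity equation for the source position (and a background level) by least squares over a window of field and gradient values. A single-window sketch on a synthetic point source follows; the geometry is hypothetical and a field proportional to 1/r (structural index N = 1) is used so the equation holds exactly.

```python
import numpy as np

# Hypothetical point source below the survey surface (z positive down)
xs, ys, zs = 2.0, -1.0, 3.0
N = 1.0                                    # structural index for a 1/r field

# Observation grid on the surface z = 0
gx, gy = np.meshgrid(np.linspace(-5, 5, 11), np.linspace(-5, 5, 11))
x, y, z = gx.ravel(), gy.ravel(), np.zeros(gx.size)
r = np.sqrt((x - xs) ** 2 + (y - ys) ** 2 + (z - zs) ** 2)
T = 1.0 / r                                # field, homogeneous of degree -1
Tx = -(x - xs) / r ** 3                    # analytic spatial gradients
Ty = -(y - ys) / r ** 3
Tz = -(z - zs) / r ** 3

# Euler's equation: (x-xs)Tx + (y-ys)Ty + (z-zs)Tz = N(B - T),
# rearranged as A @ [xs, ys, zs, B] = rhs and solved by least squares
A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
rhs = x * Tx + y * Ty + z * Tz + N * T
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
```

    In practice this solve is repeated in sliding windows over gridded aeromagnetic data, and the choice of N (0 for contacts/faults, 2 for pipe-like intrusions in the text above) controls which source geometry the depth estimates assume.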

  16. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used image reconstruction tool when a linear imaging system has been blurred by an imperfect system transfer function. However, because the point spread function (PSF) typically has a Gaussian-like distribution, coherent high-frequency components of the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained with a shape-modulated PSF. Methods: We use two different types of PSF, Gaussian shape and donut shape, to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape, relative to the Gaussian shape, that yields similar deconvolution results. Calculation of the tightly focused spot produced by a radially polarized beam shows that a donut of this size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the ratio of the full width at half maximum (FWHM) of the donut shape to that of the Gaussian shape is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF; a donut smaller than our critical value is obtained. Conclusion: Donut-shaped PSFs are shown to be useful and achievable in imaging and deconvolution processing, which is expected to have practical applications in high-resolution imaging of biological samples.

  17. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
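
    Maximum-Likelihood deconvolution with a detector response matrix is often implemented as the EM iteration (the MLEM/Richardson-Lucy-type update for Poisson counting data). A toy sketch with an invented 2 x 2 response matrix follows; the paper's Geant4-derived response for NaI detectors is far larger, but the update rule is the same.

```python
import numpy as np

def mlem_deconvolve(response, measured, n_iter=500):
    """Maximum-Likelihood (EM) deconvolution for Poisson counting data:
    the classic multiplicative update  n <- n * R^T(m / (R n)) / R^T 1."""
    n = np.ones(response.shape[1])            # flat initial spectrum
    sens = response.sum(axis=0)               # R^T 1, the per-bin sensitivity
    for _ in range(n_iter):
        forward = response @ n                # predicted measurement
        n *= (response.T @ (measured / forward)) / sens
    return n

# Toy response matrix (rows: measured channels, columns: true channels)
R = np.array([[0.8, 0.2],
              [0.2, 0.8]])
true_spectrum = np.array([3.0, 1.0])
measured = R @ true_spectrum                  # noise-free measurement
estimate = mlem_deconvolve(R, measured)
```

    The multiplicative form keeps the estimate non-negative at every iteration, which is what makes EM attractive for count spectra with masking nuclides.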

  18. A frequency-domain seismic blind deconvolution based on Gini correlations

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing

    2018-02-01

    In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Due to a unique feature, the GCs are robust in their higher tolerance of low-SNR data and are less dependent on record length. Applications of the seismic blind deconvolution based on the GCs show its capacity to estimate the unknown seismic wavelet and the reflectivity sequence for both synthetic traces and field data, even with low SNR and short sample records.
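
    A minimal sketch of the (non-symmetric) Gini correlation, which combines the values of one variable with the ranks of the other; the rank-based side is what gives the robustness claimed above. The definition below is the standard one and may differ in detail from the criterion the paper builds on it.

```python
import numpy as np

def gini_correlation(x, y):
    """Gini correlation gamma(X, Y) = cov(X, rank(Y)) / cov(X, rank(X)).
    Y enters only through its ranks, so heavy-tailed noise in Y has
    limited influence on the estimate."""
    rank_y = np.argsort(np.argsort(y)).astype(float)
    rank_x = np.argsort(np.argsort(x)).astype(float)
    return np.cov(x, rank_y)[0, 1] / np.cov(x, rank_x)[0, 1]

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
g_self = gini_correlation(x, x)
g_mono = gini_correlation(x, x ** 3)   # monotone transform preserves ranks
g_noise = gini_correlation(x, rng.standard_normal(1000))
```

    Note gamma(X, Y) and gamma(Y, X) generally differ; equality of the two is one diagnostic of an exchangeable (i.i.d.-like) relationship.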

  19. Comprehensive analysis of yeast metabolite GC x GC-TOFMS data: combining discovery-mode and deconvolution chemometric software.

    PubMed

    Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E

    2007-08-01

    The first extensive study of yeast metabolite GC x GC-TOFMS data from cells grown under fermenting (R) and respiring (DR) conditions is reported. In this study, recently developed chemometric software for three-dimensional instrumentation data was implemented, using a statistically based Fisher ratio method. The Fisher ratio method is fully automated and rapidly reduces the data to pinpoint two-dimensional chromatographic peaks that differentiate sample types while utilizing all the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold more than the number of previously reported identified metabolite peaks (26). In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 were statistically changing at the 95% confidence limit between the DR and R conditions according to the rigorous Student's t-test. PARAFAC determined the concentration ratio and provided a fully deconvoluted (i.e. mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research and is novel for GC x GC-TOFMS data due to the use of the entire data set in the analysis (640 MB x 70 runs, double-precision floating point).
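
    The Fisher ratio calculation at the heart of the discovery step can be sketched per variable: between-class variation divided by within-class variation, computed independently for every chromatographic/mass-channel variable. The class sizes and values below are invented, and the two-class form shown is a simplification of the full method.

```python
import numpy as np

def fisher_ratio(class_a, class_b):
    """Univariate Fisher ratio per variable (column): squared difference of
    class means over the sum of within-class variances."""
    ma, mb = class_a.mean(axis=0), class_b.mean(axis=0)
    va, vb = class_a.var(axis=0, ddof=1), class_b.var(axis=0, ddof=1)
    return (ma - mb) ** 2 / (va + vb)

rng = np.random.default_rng(42)
# Hypothetical peak table: 3 variables x 10 runs per growth condition.
# Variable 0 genuinely differs between conditions; variables 1-2 are noise.
R = rng.standard_normal((10, 3))
R[:, 0] += 5.0
DR = rng.standard_normal((10, 3))
ratios = fisher_ratio(R, DR)
```

    Thresholding `ratios` just above the noise floor is the step the abstract describes: variables above the threshold are candidate class-distinguishing peaks.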

  20. Effect of 100 MeV swift Si8+ ions on structural and thermoluminescence properties of Y2O3:Dy3+ nanophosphor

    NASA Astrophysics Data System (ADS)

    Shivaramu, N. J.; Lakshminarasappa, B. N.; Nagabhushana, K. R.; Singh, Fouran

    2016-05-01

    Nanoparticles of Y2O3:Dy3+ were prepared by the solution combustion method. The X-ray diffraction pattern of the sample annealed at 900°C shows a cubic structure, and the average crystallite size was found to be 31.49 nm. The field-emission scanning electron microscopy image of the 900°C-annealed sample shows well-separated spherical particles with an average particle size of around 40 nm. Pellets of Y2O3:Dy3+ were irradiated with 100 MeV swift Si8+ ions in the fluence range of 3 × 10^11 to 3 × 10^13 ions cm-2. Pristine Y2O3:Dy3+ shows seven Raman modes with peaks at 129, 160, 330, 376, 434, 467 and 590 cm-1. The intensity of these modes decreases with an increase in ion fluence. Well-resolved thermoluminescence (TL) glow peaks at ∼414 K (Tm1) and ∼614 K (Tm2) were observed in Si8+ ion-irradiated samples. It is found that the glow peak intensity at 414 K increases with dopant concentration up to 0.6 mol% and then decreases with a further increase in dopant concentration. The intensity of the high-temperature glow peak (614 K) increases linearly with ion fluence. The broad TL glow curves were deconvoluted using the glow-curve deconvolution method, and kinetic parameters were calculated using the general-order kinetic equation.

  1. Multiple-step relayed correlation spectroscopy: sequential resonance assignments in oligosaccharides.

    PubMed Central

    Homans, S W; Dwek, R A; Fernandes, D L; Rademacher, T W

    1984-01-01

    A general property of the high-resolution proton NMR spectra of oligosaccharides is the appearance of low-field, well-resolved resonances corresponding to the anomeric (H1) and H2 protons. The remaining skeletal protons resonate in the region 3-4 ppm, giving rise to an envelope of poorly resolved resonances. Assignments can be made from the H1 and H2 protons to their J-coupled neighbors (H2 and H3) within this main envelope by using 1H-1H correlated spectroscopy. However, the tight coupling (J ≅ δ) between further protons results in poor spectral dispersion with consequent assignment ambiguities. We describe here three-step two-dimensional relayed correlation spectroscopy and show how it can be used to correlate the resolved anomeric (H1) and H2 protons with remote (H4, H5) protons directly through a linear network of couplings using sequential magnetization transfer around the oligosaccharide rings. Resonance assignments are then obtained by inspection of cross-peaks that appear in well-resolved regions of the two-dimensional spectrum. This offers a general solution to the assignment problem in oligosaccharides and, importantly, these assignments will subsequently allow for the three-dimensional solution conformation to be determined by using one-dimensional and two-dimensional nuclear Overhauser experiments. PMID:6593701

  2. Passive ranging redundancy reduction in diurnal weather conditions

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Abbott, A. Lynn; Szu, Harold H.

    2013-05-01

    Ambiguity in binocular ranging (David Marr's paradox) may be resolved by using two eyes moving from side to side behind an optical bench while integrating multiple views. Moving the head from left to right with one eye closed can also help resolve foreground/background range uncertainty. That empirical observation implies redundancy in image data, which may be reduced by adopting a 3-D camera imaging model to perform compressive sensing. Here, the compressive sensing concept is examined from the perspective of redundancy reduction in images subject to diurnal and weather variations, for the purpose of resolving range uncertainty in all weather conditions, such as dawn or dusk, daytime at different light levels, or nighttime in different spectral bands. As an example, a scenario at an intersection on a country road at dawn/dusk is discussed, where the location of a traffic sign must be resolved by passive ranging to determine whether it is on the same side of the road or the opposite side, under the influence of temporal light/color variation. A spectral band extrapolation via the Lagrange Constrained Neural Network (LCNN) learning algorithm is discussed to address lost-color restoration at dawn/dusk. A numerical simulation is illustrated along with a code example.

  3. On kinetics of a dynamically unbalanced rotator with sliding friction in supports

    NASA Astrophysics Data System (ADS)

    Chistyakov, Viktor V.

    2018-05-01

    The dynamics of both free and forced rotations of a rigid body around the central but non-principal vertical axis Oz is modelled analytically and numerically under the action of dry friction forces in plain bearings and heel supports, in combination with other dissipative and conservative axial torques. The inertia forces, by D'Alembert's principle, determine the supports' reactions and hence a decelerating friction torque that depends not only on angular speed but on angular acceleration as well. This dependence leaves the dynamical equations unresolved with respect to the highest derivative and ambiguous; once resolved, they have an irrational or singular right-hand side. This irrationality/singularity results in peculiar solutions, or in their paradoxical absence, within the absolutely rigid body approximation. The kinetics obtained is analyzed and compared with the standard kinetics of rotation under the action of conservative elastic and drag torques.

  4. Phylogenetic relationships among four new complete mitogenome sequences of Pelophylax (Amphibia: Anura) from the Balkans and Cyprus.

    PubMed

    Hofman, Sebastian; Pabijan, Maciej; Osikowski, Artur; Litvinchuk, Spartak N; Szymura, Jacek M

    2016-09-01

    We present the full-length mitogenome sequences of four European water frog species: Pelophylax cypriensis, P. epeiroticus, P. kurtmuelleri and P. shqipericus. The mtDNA size varied from 17,363 to 17,895 bp, and its organization, with the LPTF tRNA gene cluster preceding the 12S rRNA gene, displayed the typical Neobatrachian arrangement. Maximum likelihood and Bayesian inference revealed a well-resolved mtDNA phylogeny of seven European Pelophylax species. The uncorrected p-distance among Pelophylax mitogenomes was 9.6 (range 0.01-0.13). Most divergent was the P. shqipericus mitogenome, clustering with the "P. lessonae" group, in contrast to the other three new Pelophylax mitogenomes, which are related to the "P. bedriagae/ridibundus" lineage. The new mitogenomes resolve ambiguities in the phylogenetic placement of P. cretensis and P. epeiroticus.

  5. Direct observation of Young’s double-slit interferences in vibrationally resolved photoionization of diatomic molecules

    PubMed Central

    Canton, Sophie E.; Plésiat, Etienne; Bozek, John D.; Rude, Bruce S.; Decleva, Piero; Martín, Fernando

    2011-01-01

    Vibrationally resolved valence-shell photoionization spectra of H2, N2 and CO have been measured in the photon energy range 20–300 eV using third-generation synchrotron radiation. Young’s double-slit interferences lead to oscillations in the corresponding vibrational ratios, showing that the molecules behave as two-center electron-wave emitters and that the associated interferences leave their trace in the angle-integrated photoionization cross section. In contrast to previous work, the oscillations are directly observable in the experiment, thereby removing any possible ambiguity related to the introduction of external parameters or fitting functions. A straightforward extension of an original idea proposed by Cohen and Fano [Cohen HD, Fano U (1966) Phys Rev 150:30] confirms this interpretation and shows that it is also valid for diatomic heteronuclear molecules. Results of accurate theoretical calculations are in excellent agreement with the experimental findings.

  6. Faint Object Camera observations of a globular cluster nova field

    NASA Technical Reports Server (NTRS)

    Margon, Bruce; Anderson, Scott F.; Downes, Ronald A.; Bohlin, Ralph C.; Jakobsen, Peter

    1991-01-01

    The Faint Object Camera onboard Hubble Space Telescope has obtained U and B images of the field of Nova Ophiuchi 1938 in the globular cluster M14 (NGC 6402). The candidate for the quiescent nova suggested by Shara et al. (1986) is clearly resolved into at least six separate images, probably all stellar, in a region of 0.5 arcsec. Although two of these objects are intriguing as they are somewhat ultraviolet, the actual nova counterpart remains ambiguous, as none of the images in the field has a marked UV excess. Many stars within the 1.4 arcsec (2 sigma) uncertainty of the nova outburst position are viable counterparts if only astrometric criteria are used for selection. The 11 x 11 arcsec frames easily resolve several hundred stars in modest exposures, implying that HST even in its current optical configuration will be unique for studies of very crowded fields at moderate (B = 22) limiting magnitudes.

  7. Processing strategy for water-gun seismic data from the Gulf of Mexico

    USGS Publications Warehouse

    Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.

    2000-01-01

    In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in3 water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum-phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm was attempted. This nonminimum-phase wavelet deconvolution compresses a long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip-moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.

  8. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis.

    PubMed

    Carnevale Neto, Fausto; Pilon, Alan C; Selegato, Denise M; Freire, Rafael T; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plants species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economical value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC-peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS to peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  9. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis

    PubMed Central

    Carnevale Neto, Fausto; Pilon, Alan C.; Selegato, Denise M.; Freire, Rafael T.; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P.; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a deconvolution method complementary to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attest to the value of this approach as an improved dereplication method for complex biological samples such as plant extracts. PMID:27747213

  10. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
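
    The extraction step described above (deconvolving a Z-stack by an idealized point spread function derived from the same stack) is, at its core, a regularized deconvolution. As a minimal 2-D stand-in, a Wiener filter illustrates that estimation step; the Gaussian "PSF", image size, and noise parameter below are assumptions for the demo, not the paper's optics.

```python
import numpy as np

def wiener_deconvolve(observed, kernel, nsr=1e-3):
    """Frequency-domain Wiener deconvolution: estimate the field that,
    blurred by `kernel` (same shape as `observed`, centred), produced
    `observed`; `nsr` regularises frequencies where the kernel is weak."""
    G = np.fft.fft2(observed)
    H = np.fft.fft2(np.fft.ifftshift(kernel))
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

# Toy check: blur a point source with a Gaussian, then recover it.
n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
point = np.zeros((n, n)); point[n // 2, n // 2] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(point)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The restored image re-concentrates the flux the blur spread out, which is exactly the behaviour the extracted-PSF method relies on.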

  11. The Recurrent Nova T CrB Did Not Erupt In 1842

    NASA Astrophysics Data System (ADS)

    Schaefer, Bradley E.

    2013-01-01

    The recurrent nova T CrB produced one of the first well-observed nova eruptions, in 1866, and 80 years later it erupted again in 1946. Just after the 1866 eruption, Sir John Herschel reported to the Monthly Notices that he had seen the same star in his naked-eye charting of the sky on 1842 June 9, implying a prior eruption 24 years earlier, with substantial implications for astrophysics. Unfortunately, the chart in the Monthly Notices was ambiguous and misleading, including on whether the recorded position is or is not that of T CrB. So it has long been unclear whether T CrB did indeed have an eruption in 1842. To resolve this, I have made complete searches through the various archives with Herschel material, including the large collections at the Harry Ransom Center in Austin, the Royal Astronomical Society, the complete Herschel correspondence, and the Royal Society, plus three smaller archives, as well as consulting with various Herschel experts. In one letter from 1866 to William Huggins, Herschel enclosed his own copy of his original observations, and with this all the ambiguities are resolved. It turns out that Herschel's indicated star was at the same position as a steady background star (BD+25 3020, V=7.06, G8V) and not that of T CrB, and Herschel regularly saw stars as faint as V=7.5 mag because he was using an opera glass. With this, there is no evidence for a T CrB eruption in 1842. Supported by the National Science Foundation.

  12. Likelihood-based molecular-replacement solution for a highly pathological crystal with tetartohedral twinning and sevenfold translational noncrystallographic symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sliwiak, Joanna; Jaskolski, Mariusz, E-mail: mariuszj@amu.edu.pl; A. Mickiewicz University, Grunwaldzka 6, 60-780 Poznan

    With the implementation of a molecular-replacement likelihood target that accounts for translational noncrystallographic symmetry, it became possible to solve the crystal structure of a protein with seven tetrameric assemblies arrayed translationally along the c axis. The new algorithm found 56 protein molecules in reduced symmetry (P1), which was used to resolve space-group ambiguity caused by severe twinning. Translational noncrystallographic symmetry (tNCS) is a pathology of protein crystals in which multiple copies of a molecule or assembly are found in similar orientations. Structure solution is problematic because this breaks the assumptions used in current likelihood-based methods. To cope with such cases, new likelihood approaches have been developed and implemented in Phaser to account for the statistical effects of tNCS in molecular replacement. Using these new approaches, it was possible to solve the crystal structure of a protein exhibiting an extreme form of this pathology with seven tetrameric assemblies arrayed along the c axis. To resolve space-group ambiguities caused by tetartohedral twinning, the structure was initially solved by placing 56 copies of the monomer in space group P1 and using the symmetry of the solution to define the true space group, C2. The resulting structure of Hyp-1, a pathogenesis-related class 10 (PR-10) protein from the medicinal herb St John’s wort, reveals the binding modes of the fluorescent probe 8-anilino-1-naphthalene sulfonate (ANS), providing insight into the function of the protein in binding or storing hydrophobic ligands.

  13. Assignment of protonation states in proteins and ligands: combining pKa prediction with hydrogen bonding network optimization.

    PubMed

    Krieger, Elmar; Dunbrack, Roland L; Hooft, Rob W W; Krieger, Barbara

    2012-01-01

    Among the many applications of molecular modeling, drug design is probably the one with the highest demands on the accuracy of the underlying structures. During lead optimization, the position of every atom in the binding site should ideally be known with high precision to identify those chemical modifications that are most likely to increase drug affinity. Unfortunately, X-ray crystallography at common resolution yields an electron density map that is too coarse, since the chemical elements and their protonation states cannot be fully resolved. This chapter describes the steps required to fill in the missing knowledge, by devising an algorithm that can detect and resolve the ambiguities. First, the pKa values of acidic and basic groups are predicted. Second, their potential protonation states are determined, including all permutations (considering, for example, protons that can jump between the oxygens of a phosphate group). Third, those groups of atoms are identified that can adopt alternative but indistinguishable conformations with essentially the same electron density. Fourth, potential hydrogen bond donors and acceptors are located. Finally, all these data are combined in a single "configuration energy function," whose global minimum is found with the SCWRL algorithm, which employs dead-end elimination and graph theory. As a result, one obtains a complete model of the protein and its bound ligand, with ambiguous groups rotated to the best orientation and with protonation states assigned considering the current pH and the H-bonding network. An implementation of the algorithm has been available since 2008 as part of the YASARA modeling & simulation program.
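
    A toy version of the final step, minimizing a "configuration energy" over protonation permutations, can be sketched by brute force. The group names and the pair-energy function below are invented; real implementations use dead-end elimination and graph decomposition (as in SCWRL) precisely to avoid this exponential enumeration.

```python
from itertools import product

def best_protonation(groups, pair_energy):
    """Return the protonation state (0/1 per group) minimising the sum of
    pairwise hydrogen-bond energies -- an exhaustive stand-in for the
    dead-end-elimination/graph search used in practice."""
    n = len(groups)
    best_state, best_e = None, float("inf")
    for state in product((0, 1), repeat=n):
        e = sum(pair_energy(groups[i], state[i], groups[j], state[j])
                for i in range(n) for j in range(i + 1, n))
        if e < best_e:
            best_state, best_e = state, e
    return dict(zip(groups, best_state)), best_e

# Hypothetical example: an H-bond (energy -1.0) forms only when exactly
# one partner of a donor/acceptor pair carries the proton.
bond = lambda g1, s1, g2, s2: -1.0 if s1 != s2 else 0.0
states, energy = best_protonation(["GroupA", "GroupB"], bond)
```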

  14. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and the separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  15. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes, such as clustering, perceptual grouping, and the separation of figure from ground, are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy together with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. That allows creating new intelligent computer vision systems for the robotics and defense industries.

  16. A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.; Settle, G. L.; Knight, R. D.

    1975-01-01

    Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.
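
    The "convolute integers" referred to above are the Savitzky-Golay idea: least-squares polynomial smoothing over a sliding window collapses to a single convolution with fixed weights. A minimal sketch of noise filtering followed by peak picking; the window, polynomial order, and threshold are arbitrary demo choices, not the NOFIPP-DECON parameters.

```python
import numpy as np

def savgol_coeffs(window, order):
    """'Convolute integer' weights (Savitzky-Golay): fitting a polynomial
    over a sliding window reduces to one convolution with these weights."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    return np.linalg.pinv(A)[0]  # row 0 evaluates the fit at the window centre

def smooth_and_pick(y, window=7, order=2, thresh=0.5):
    """Noise-filter with the convolute weights, then pick local maxima."""
    w = savgol_coeffs(window, order)
    s = np.convolve(y, w[::-1], mode="same")  # reversal makes it a correlation
    return [i for i in range(1, len(s) - 1)
            if s[i - 1] < s[i] > s[i + 1] and s[i] > thresh]

# Synthetic spectrum: one Gaussian peak centred on channel 30.
t = np.arange(60)
y = np.exp(-0.5 * ((t - 30) / 3.0) ** 2)
print(smooth_and_pick(y))  # → [30]
```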

  17. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹, respectively, at an average geometrical height of roughly 50 km. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.

  18. Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.

    PubMed

    Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima

    2017-09-13

    One third of humans are infected lifelong with the brain-dwelling protozoan parasite Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: we identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.

  19. Peptide de novo sequencing of mixture tandem mass spectra.

    PubMed

    Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank

    2016-09-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation and thus prone to false identifications. The deconvolution approach matched complementary b-, y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
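
    The complementary-ion matching described above rests on simple arithmetic: for singly protonated fragments of a peptide of neutral mass M, a b-ion and its complementary y-ion satisfy m(b) + m(y) = M + 2·m(proton). A minimal sketch; the fragment and precursor masses are invented for illustration.

```python
PROTON = 1.00728  # proton mass, Da

def pair_complementary_ions(fragment_mzs, precursor_mass, tol=0.02):
    """Pair singly charged fragments whose m/z values sum to M + 2*proton:
    such b/y pairs are sequence-specific evidence for that precursor,
    letting co-isolated peptides be separated into virtual spectra."""
    target = precursor_mass + 2 * PROTON
    pairs = []
    for i, b in enumerate(fragment_mzs):
        for y in fragment_mzs[i + 1:]:
            if abs(b + y - target) <= tol:
                pairs.append((b, y))
    return pairs

# Invented example: one b/y pair matches a precursor of neutral mass 1000.50 Da.
print(pair_complementary_ions([300.20, 450.00, 702.31], 1000.50))
```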

  20. Systems Proteomics for Translational Network Medicine

    PubMed Central

    Arrell, D. Kent; Terzic, Andre

    2012-01-01

    Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016

  1. Deconvolving the Nucleus of Centaurus A Using Chandra PSF Library

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita

    2000-01-01

    Centaurus A (NGC 5128) is a giant early-type galaxy containing the nearest (at 3.5 Mpc) radio-bright Active Galactic Nucleus (AGN). Cen A was observed with the High Resolution Camera (HRC) on the Chandra X-ray Observatory on several occasions since the launch in July 1999. The high-angular-resolution (less than 0.5 arcsecond) Chandra/HRC images reveal X-ray multi-scale structures in this object with unprecedented detail and clarity, including the bright nucleus believed to be associated with a supermassive black hole. We explored the spatial extent of the Cen A nucleus using deconvolution techniques on the full-resolution Chandra images. Model point spread functions (PSFs) were derived from the standard Chandra raytrace PSF library as well as from unresolved point sources observed with Chandra. The deconvolved images show that the Cen A nucleus is resolved and asymmetric. We discuss several possible causes of this extended emission and of the asymmetries.

  2. Properties of the 4.45 eV optical absorption band in LiF:Mg,Ti.

    PubMed

    Nail, I; Oster, L; Horowitz, Y S; Biderman, S; Belaish, Y

    2006-01-01

    The optical absorption (OA) and thermoluminescence (TL) of dosimetric LiF:Mg,Ti (TLD-100) as well as nominally pure LiF single crystal have been studied as a function of irradiation dose, thermal and optical bleaching in order to investigate the role of the 4.45 eV OA band in low temperature TL. Computerised deconvolution was used to resolve the absorption spectrum into individual gaussian bands and the TL glow curve into glow peaks. Although the 4.45 eV OA band shows thermal decay characteristics similar to the 4.0 eV band its dose filling constant and optical bleaching properties suggest that it cannot be associated with the TL of composite peaks 4 or 5. Its presence in optical grade single crystal LiF further suggests that it is an intrinsic defect or possibly associated with chance impurities other than Mg, Ti.
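
    Resolving an absorption spectrum into individual Gaussian bands, as the computerised deconvolution above does, becomes a linear least-squares problem once band centres and widths are fixed. A sketch using the 4.0 and 4.45 eV band positions mentioned in the abstract; the common width and the amplitudes are invented demo values.

```python
import numpy as np

def band_amplitudes(energy, absorbance, centers, width):
    """Fit absorbance as a sum of Gaussian bands with known centres and a
    common width; the unknown amplitudes follow from linear least squares."""
    basis = np.stack([np.exp(-0.5 * ((energy - c) / width) ** 2)
                      for c in centers], axis=1)
    amps, *_ = np.linalg.lstsq(basis, absorbance, rcond=None)
    return amps

# Synthetic spectrum: bands at 4.0 and 4.45 eV, width 0.15 eV.
e = np.linspace(3.5, 5.0, 200)
spectrum = (1.0 * np.exp(-0.5 * ((e - 4.0) / 0.15) ** 2)
            + 0.4 * np.exp(-0.5 * ((e - 4.45) / 0.15) ** 2))
amps = band_amplitudes(e, spectrum, [4.0, 4.45], 0.15)
```

When centres or widths are also unknown, the fit becomes nonlinear, but this linear inner step is the workhorse of most band-deconvolution codes.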

  3. POX 186: the ultracompact blue compact dwarf galaxy reveals its nature

    NASA Astrophysics Data System (ADS)

    Doublier, V.; Kunth, D.; Courbin, F.; Magain, P.

    2000-01-01

    High-resolution, ground-based R and I band observations of the ultracompact dwarf galaxy POX 186 are presented. The data, obtained with the ESO New Technology Telescope (NTT), are analyzed using a new deconvolution algorithm which allows one to resolve the innermost regions of this stellar-like object into three Super-Star Clusters (SSCs). Upper limits to both the masses (M ~ 10^5 Msun) and physical sizes (<=60 pc) of the SSCs are set. In addition, and maybe most importantly, extended light emission underlying the compact star-forming region is clearly detected in both bands. The R-I color rules out nebular Hα contamination and is consistent with an old stellar population. This casts doubt on the hypothesis that Blue Compact Dwarf Galaxies (BCDGs) are young galaxies. Based on observations carried out at the NTT in La Silla, operated by the European Southern Observatory, during Director's Discretionary Time.

  4. Maximum entropy deconvolution of the optical jet of 3C 273

    NASA Technical Reports Server (NTRS)

    Evans, I. N.; Ford, H. C.; Hui, X.

    1989-01-01

    The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
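
    Maximum entropy is one of several iterative schemes that restore an image given its point spread function. As an executable stand-in we sketch Richardson-Lucy, a maximum-likelihood iteration with the same flavour (repeatedly reweighting a non-negative estimate against the blurred data); it is a different algorithm, named plainly, and the Gaussian PSF and two-knot scene below are toy assumptions, not the 3C 273 observation.

```python
import numpy as np

def richardson_lucy(observed, psf, iters=60):
    """Richardson-Lucy deconvolution, a maximum-likelihood relative of
    maximum-entropy restoration: multiplicative updates keep the estimate
    non-negative while it converges toward the data when re-blurred."""
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2
    H = fft2(np.fft.ifftshift(psf))
    conv = lambda a, K: np.real(ifft2(fft2(a) * K))
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        ratio = observed / np.maximum(conv(est, H), 1e-12)
        est = est * conv(ratio, np.conj(H))  # conj(H) applies the mirrored PSF
    return est

# Toy scene: two point sources ("knots") blurred by a Gaussian PSF.
n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
scene = np.zeros((n, n)); scene[12, 12] = 1.0; scene[20, 20] = 1.0
blurred = np.clip(np.real(np.fft.ifft2(
    np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf)))), 0.0, None)
est = richardson_lucy(blurred, psf)
```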

  5. Flowthrough Reductive Catalytic Fractionation of Biomass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Eric M.; Stone, Michael L.; Katahira, Rui

    2017-11-01

    Reductive catalytic fractionation (RCF) has emerged as a leading biomass fractionation and lignin valorization strategy. Here, flowthrough reactors were used to investigate RCF of poplar. Most RCF studies to date have been conducted in batch, but a flow-based process enables the acquisition of intrinsic kinetic and mechanistic data essential to accelerate the design, optimization, and scale-up of RCF processes. Time-resolved product distributions and yields obtained from experiments with different catalyst loadings were used to identify and deconvolute events during solvolysis and hydrogenolysis. Multi-bed RCF experiments provided unique insights into catalyst deactivation, showing that leaching, sintering, and surface poisoning are causesmore » for decreased catalyst performance. The onset of catalyst deactivation resulted in higher concentrations of unsaturated lignin intermediates and increased occurrence of repolymerization reactions, producing high-molecular-weight species. Overall, this study demonstrates the concept of flowthrough RCF, which will be vital for realistic scale-up of this promising approach.« less

  6. Spectral and Temporal Laser Fluorescence Analysis Such as for Natural Aquatic Environments

    NASA Technical Reports Server (NTRS)

    Chekalyuk, Alexander (Inventor)

    2015-01-01

    An Advanced Laser Fluorometer (ALF) can combine spectrally and temporally resolved measurements of laser-stimulated emission (LSE) for characterization of dissolved and particulate matter, including fluorescence constituents, in liquids. Spectral deconvolution (SDC) analysis of LSE spectral measurements can accurately retrieve information about individual fluorescent bands, such as can be attributed to chlorophyll-a (Chl-a), phycobiliprotein (PBP) pigments, or chromophoric dissolved organic matter (CDOM), among others. Improved physiological assessments of photosynthesizing organisms can use SDC analysis and temporal LSE measurements to assess variable fluorescence corrected for SDC-retrieved background fluorescence. Fluorescence assessments of Chl-a concentration based on LSE spectral measurements can be improved using photo-physiological information from temporal measurements. Quantitative assessments of PBP pigments, CDOM, and other fluorescent constituents, as well as basic structural characterizations of photosynthesizing populations, can be performed using SDC analysis of LSE spectral measurements.

  7. Topographic profiling and refractive-index analysis by use of differential interference contrast with bright-field intensity and atomic force imaging.

    PubMed

    Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim

    2004-04-10

    A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardations between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allow for reduction of the phase restoration problem from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a high-accuracy three-dimensional reconstruction of the refractive index of the object that is to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned-probe microscopies and restoration algorithms for phase reconstruction.
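
    The reduction to linear equations can be illustrated with a toy phase-shifting model: if each image obeys I_k = a + b·cos(φ + θ_k) for known bias retardations θ_k, expanding the cosine makes the system linear in (a, b·cosφ, b·sinφ) per pixel. This simplified intensity model is our assumption for illustration, not the paper's full DIC image-formation model.

```python
import numpy as np

def phase_from_bias_series(images, biases):
    """Per-pixel linear least squares for I_k = a + c*cos(th_k) - s*sin(th_k),
    where c = b*cos(phi) and s = b*sin(phi); the phase is atan2(s, c)."""
    A = np.stack([np.ones_like(biases), np.cos(biases), -np.sin(biases)], axis=1)
    I = np.stack([im.ravel() for im in images])   # (num_biases, num_pixels)
    sol, *_ = np.linalg.lstsq(A, I, rcond=None)
    a, c, s = sol
    return np.arctan2(s, c).reshape(images[0].shape)

# Toy data: a known phase map sampled at four bias settings.
phi = np.linspace(-1.0, 1.0, 16).reshape(4, 4)
thetas = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
imgs = [1.0 + 0.5 * np.cos(phi + th) for th in thetas]
recovered = phase_from_bias_series(imgs, thetas)
```

Three bias settings are the minimum for the three unknowns; extra settings over-determine the system and average down noise.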

  8. WISEP J061135.13-041024.0 AB: A J-band Flux Reversal Binary at the L/T Transition

    NASA Astrophysics Data System (ADS)

    Gelino, Christopher R.; Smart, R. L.; Marocco, Federico; Kirkpatrick, J. Davy; Cushing, Michael C.; Mace, Gregory; Mendez, Rene A.; Tinney, C. G.; Jones, Hugh R. A.

    2014-07-01

    We present Keck II laser guide star adaptive optics observations of the brown dwarf WISEP J061135.13-041024.0 showing it is a binary with a component separation of 0.4 arcsec. This system is one of the six known resolved binaries in which the magnitude differences between the components show a reversal in sign between the Y/J band and the H/K bands. Deconvolution of the composite spectrum results in a best-fit binary solution with L9 and T1.5 components. We also present a preliminary parallax placing the system at a distance of 21.2 ± 1.3 pc. Using the distance and resolved magnitudes we are able to place WISEP J061135.13-041024.0 AB on a color-absolute magnitude diagram, showing that this system contributes to the well-known "J-band bump" and that the components' properties appear similar to those of other late-type L and early-type T dwarfs. Fitting our data to a set of cloudy atmosphere models suggests the system has an age >1 Gyr, with WISE 0611-0410 A having an effective temperature (Teff) of 1275-1325 K and a mass of 64-65 MJup, and WISE 0611-0410 B having Teff = 1075-1115 K and a mass of 40-65 MJup.

  9. The effects of bilingualism on conflict monitoring, cognitive control, and garden-path recovery.

    PubMed

    Teubner-Rhodes, Susan E; Mishler, Alan; Corbett, Ryan; Andreu, Llorenç; Sanz-Torrent, Monica; Trueswell, John C; Novick, Jared M

    2016-05-01

    Bilinguals demonstrate benefits on non-linguistic tasks requiring cognitive control-the regulation of mental activity to resolve information-conflict during processing. This "bilingual advantage" has been attributed to the consistent management of two languages, yet it remains unknown if these benefits extend to sentence processing. In monolinguals, cognitive control helps detect and revise misinterpretations of sentence meaning. Here, we test if the bilingual advantage extends to parsing and interpretation by comparing bilinguals' and monolinguals' syntactic ambiguity resolution before and after practicing N-back, a non-syntactic cognitive-control task. Bilinguals outperformed monolinguals on a high-conflict but not a no-conflict version of N-back and on sentence comprehension, indicating that the advantage extends to language interpretation. Gains on N-back conflict trials also predicted comprehension improvements for ambiguous sentences, suggesting that the bilingual advantage emerges across tasks tapping shared cognitive-control procedures. Because the overall task benefits were observed for conflict and non-conflict trials, bilinguals' advantage may reflect increased cognitive flexibility. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Online interpretation of scalar quantifiers: insight into the semantics-pragmatics interface.

    PubMed

    Huang, Yi Ting; Snedeker, Jesse

    2009-05-01

    Scalar implicature has served as a test case for exploring the relations between semantic and pragmatic processes during language comprehension. Most studies have used reaction time methods, and the results have been variable. In these studies, we use the visual-world paradigm to investigate implicature. We recorded participants' eye movements during commands like "Point to the girl that has some of the socks" in the presence of a display in which one girl had two of four socks and another had three of three soccer balls. These utterances contained an initial period of ambiguity in which the semantics of some was compatible with both characters. This ambiguity could be immediately resolved by a pragmatic implicature which would restrict some to a proper subset. Instead, in Experiments 1 and 2, we found that participants were substantially delayed, suggesting a lag between semantic and pragmatic processing. In Experiment 3, we examined interpretations of some when competitors were inconsistent with the semantics (girl with socks vs. girl with no socks). We found quick resolution of the target, suggesting that the previous delays were specifically linked to pragmatic analysis.

  11. GPS-Like Phasing Control of the Space Solar Power System Transmission Array

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    2003-01-01

    The problem of phasing of the Space Solar Power System's transmission array has been addressed by developing a GPS-like radio navigation system. The goal of this system is to provide power transmission phasing control for each node of the array that causes the power signals to add constructively at the ground reception station. The phasing control system operates in a distributed manner, which makes it practical to implement. A leader node and two radio navigation beacons are used to control the power transmission phasing of multiple follower nodes. The necessary one-way communications to the follower nodes are implemented using the RF beacon signals. The phasing control system uses differential carrier phase relative navigation/timing techniques. A special feature of the system is an integer ambiguity resolution procedure that periodically resolves carrier phase cycle count ambiguities via encoding of pseudo-random number codes on the power transmission signals. The system is capable of achieving phasing accuracies on the order of 3 mm down to 0.4 mm depending on whether the radio navigation beacons operate in the L or C bands.
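    The integer ambiguity resolution step above rests on a simple rounding identity: a coarse but unambiguous range (here code-derived) pins down the whole number of carrier cycles, and the precise carrier phase fraction supplies the rest. The sketch below uses assumed numbers (a 19 cm carrier wavelength and a code error under half a wavelength) and is not the paper's PRN-code-on-power-signal procedure:

```python
import math

WAVELENGTH = 0.19  # m; roughly a GPS L1-like carrier, illustrative value

def resolve_integer_ambiguity(code_range_m, carrier_phase_cycles):
    """Pick the integer cycle count N so that (N + phase) * wavelength
    best matches the coarse code-derived range."""
    n_float = code_range_m / WAVELENGTH - carrier_phase_cycles
    return round(n_float)

def carrier_range(code_range_m, carrier_phase_cycles):
    """Millimeter-level range once the integer ambiguity is fixed."""
    n = resolve_integer_ambiguity(code_range_m, carrier_phase_cycles)
    return (n + carrier_phase_cycles) * WAVELENGTH

# Coarse code measurement (decimeter-level error) + precise phase fraction
true_range = 20_000.0037
phase = (true_range / WAVELENGTH) % 1.0   # fractional carrier phase
code = true_range + 0.05                  # code range with 5 cm error
print(abs(carrier_range(code, phase) - true_range) < 1e-6)  # True
```

Naive rounding like this only works while the coarse-range error stays below half a wavelength, which is why the paper periodically re-resolves the ambiguities with encoded pseudo-random number codes.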

  12. Resolving combinatorial ambiguities in dilepton tt̄ event topologies with constrained M2 variables

    NASA Astrophysics Data System (ADS)

    Debnath, Dipsikha; Kim, Doojin; Kim, Jeong Han; Kong, Kyoungchul; Matchev, Konstantin T.

    2017-10-01

    We advocate the use of on-shell constrained M2 variables in order to mitigate the combinatorial problem in supersymmetry-like events with two invisible particles at the LHC. We show that, in comparison to other approaches in the literature, the constrained M2 variables provide superior ansätze for the unmeasured invisible momenta and therefore can be usefully applied to resolve combinatorial ambiguities. We illustrate our procedure with the example of dilepton tt̄ events. We critically review the existing methods based on the Cambridge M_T2 variable and MAOS reconstruction of invisible momenta, and show that their algorithm can be simplified without loss of sensitivity, due to a perfect correlation between events with complex solutions for the invisible momenta and events exhibiting a kinematic endpoint violation. We then demonstrate that the efficiency for selecting the correct partition is further improved by utilizing the M2 variables instead. Finally, we also consider the general case in which the underlying mass spectrum is unknown and no kinematic endpoint information is available.
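    The M2 variables generalize the Cambridge M_T2 minimization over the unmeasured invisible momenta. A brute-force sketch of the underlying M_T2 quantity conveys the idea: split the missing transverse momentum between the two invisible particles and minimize the larger of the two transverse masses. The coarse grid scan, massless invisibles, and all momenta below are illustrative, not an endorsed implementation:

```python
import math

def mt_sq(m_vis, px, py, qx, qy):
    """Squared transverse mass of a visible system (m_vis, px, py)
    paired with a massless invisible particle of transverse momentum (qx, qy)."""
    et_vis = math.sqrt(m_vis**2 + px**2 + py**2)
    et_inv = math.hypot(qx, qy)
    return m_vis**2 + 2.0 * (et_vis * et_inv - px * qx - py * qy)

def mt2_grid(legs, met, n=120, span=300.0):
    """Brute-force MT2: minimise over splits q1 + q2 = met of the
    larger of the two transverse masses (coarse grid, GeV units)."""
    (m1, p1x, p1y), (m2, p2x, p2y) = legs
    mex, mey = met
    best = float("inf")
    for i in range(n + 1):
        for j in range(n + 1):
            q1x = -span + 2.0 * span * i / n
            q1y = -span + 2.0 * span * j / n
            q2x, q2y = mex - q1x, mey - q1y
            worst = max(mt_sq(m1, p1x, p1y, q1x, q1y),
                        mt_sq(m2, p2x, p2y, q2x, q2y))
            best = min(best, worst)
    return math.sqrt(best)

# Toy dilepton-like event: two visible legs (mass, px, py) and missing pT
print(round(mt2_grid([(5.0, 60.0, 10.0), (5.0, -40.0, -20.0)],
                     (-20.0, 10.0)), 1))
```

In the combinatorial context, such a variable is evaluated for each candidate pairing of visible objects, and the partition giving the smaller value (or satisfying on-shell constraints, for M2) is preferred.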

  13. A Bat's-Eye View of Holocene Climate Change in the Southwest: Resolving Ambiguities in Cave Isotopic Records

    NASA Astrophysics Data System (ADS)

    Cole, J. E.; Truebe, S. A.; Harrington, M. D.; Woodhead, J. D.; Overpeck, J. T.; Hlohowskyj, S.; Henderson, G. M.

    2015-12-01

    In dry environments, speleothems provide an outstanding archive of information on past climate change, particularly since lakes are typically absent or intermittent. Speleothem stable isotopes are widely used for climate reconstruction, but the isotope-climate relationship is complex in arid-region precipitation, and within-cave processes further complicate climate interpretations. Our isotope results from 3 southeastern Arizona caves, spanning the past 3.5-12 kyr, collectively indicate a weakening monsoon from 7 kyr to present. These records exhibit substantial multidecadal-to-multicentury variability that is sometimes shared and sometimes independent among caves. Strategies to overcome ambiguities in isotope records include long-term monitoring of cave dripwaters, multi-site comparisons, and multiproxy measurements. Monthly dripwater measurements from two caves spanning several years highlight substantial seasonal biases that create distinct differences in the climate sensitivity of individual cave records. These biases can lead to a lack of correlation between records, but they also create opportunities for seasonally specific moisture reconstructions. New preliminary analyses suggest that elemental data can help to unravel the multivariate signal contained in speleothem oxygen isotope records.

  14. Dorsomedial striatum involvement in regulating conflict between current and presumed outcomes.

    PubMed

    Mestres-Missé, Anna; Bazin, Pierre-Louis; Trampel, Robert; Turner, Robert; Kotz, Sonja A

    2014-09-01

    The balance between automatic and controlled processing is essential to human flexible but optimal behavior. On the one hand, the automation of habitual behavior and processing is indispensable, and, on the other hand, strategic processing is needed in light of unexpected, conflicting, or new situations. Using ultra-high-field high-resolution functional magnetic resonance imaging (7T-fMRI), the present study examined the role of subcortical structures in mediating this balance. Participants were asked to judge the congruency of sentences containing a semantically ambiguous or unambiguous word. Ambiguous sentences had three possible resolutions: dominant meaning, subordinate meaning, and incongruent. The dominant interpretation represents the most habitual response, whereas both the subordinate and incongruent options clash with this automatic response, and, hence, require cognitive control. Moreover, the subordinate resolution entails a less expected but correct outcome, while the incongruent condition is simply wrong. The current results reveal the involvement of the anterior dorsomedial striatum in modulating and resolving conflict between actual and expected outcomes, and highlight the importance of cortical and subcortical cooperation in this process. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Method for ambiguity resolution in range-Doppler measurements

    NASA Technical Reports Server (NTRS)

    Heymsfield, Gerald M. (Inventor); Miller, Lee S. (Inventor)

    1994-01-01

    A method for resolving range and Doppler target ambiguities when the target has substantial range or a high relative velocity. A first signal is generated, along with a second signal that is coherent with the first but at a slightly different frequency, such that there exists a difference in frequency between the two signals of Δf_t. The first and second signals are converted into a dual-frequency pulsed signal, amplified, and transmitted towards a target. A reflected dual-frequency signal is received from the target, amplified, and changed to an intermediate dual-frequency signal. The intermediate dual-frequency signal is amplified, and a shifted difference frequency Δf_r is extracted from it by a nonlinear detector. The final step is to generate two quadrature signals from the difference frequency Δf_t and the shifted difference frequency Δf_r and to process them to determine the range and Doppler information of the target.
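    The benefit of the dual-frequency scheme can be seen with a little arithmetic: the difference frequency is far lower than either carrier, so its round-trip phase stays unambiguous out to c/(2Δf), a much longer interval than either carrier alone provides. A minimal sketch with assumed numbers (a 5 kHz offset and a 12.5 km target):

```python
import math

C = 3.0e8  # m/s, speed of light

def range_from_difference_phase(phase_rad, delta_f):
    """Recover target range from the round-trip phase of the extracted
    difference frequency; unambiguous out to c / (2 * delta_f)."""
    return (phase_rad / (2.0 * math.pi)) * C / (2.0 * delta_f)

delta_f = 5.0e3        # 5 kHz offset between the two coherent signals
r_true = 12_500.0      # m; well inside c/(2*delta_f) = 30 km
phase = 2.0 * math.pi * (2.0 * r_true * delta_f / C)   # round-trip phase
print(range_from_difference_phase(phase, delta_f))     # 12500.0
```

A single carrier at, say, 10 GHz would wrap every 1.5 cm of range; the synthesized 5 kHz difference wraps only every 30 km, which is the essence of how the two coherent signals resolve the ambiguity.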

  16. 3D MHD Models of Active Region Loops

    NASA Technical Reports Server (NTRS)

    Ofman, Leon

    2004-01-01

    Present imaging and spectroscopic observations of active region loops allow determination of many physical parameters of the coronal loops, such as the density, temperature, velocity of flows in loops, and the magnetic field. However, due to projection effects many of these parameters remain ambiguous. Three-dimensional imaging in EUV by the STEREO spacecraft will help to resolve the projection ambiguities, and the observations could be used to set up 3D MHD models of active region loops to study the dynamics and stability of active regions. Here the results of 3D MHD models of active region loops are presented, along with progress towards more realistic 3D MHD models of active regions. In particular, the effects of impulsive events on the excitation of active region loop oscillations, and the generation, propagation, and reflection of EIT waves, are shown. It is shown how 3D MHD models together with 3D EUV observations can be used as a diagnostic tool for active region loop physical parameters, and to advance the science of the sources of solar coronal activity.

  17. Harnessing mtDNA variation to resolve ambiguity in ‘Redfish’ sold in Europe

    PubMed Central

    Moore, Lauren; Pampoulie, Christophe; Di Muri, Cristina; Vandamme, Sara; Mariani, Stefano

    2017-01-01

    Morphology-based identification of North Atlantic Sebastes has long been controversial, and misidentification may produce misleading data, with cascading consequences that negatively affect fisheries management and seafood labelling. North Atlantic Sebastes comprises four species, commonly known as 'redfish', but little is known about the number, identity and labelling accuracy of redfish species sold across Europe. We used a molecular approach to identify redfish species from 'blind' specimens to evaluate the performance of the Barcode of Life (BOLD) and Genbank databases, as well as carrying out a market product accuracy survey from retailers across Europe. The conventional BOLD approach proved ambiguous, and phylogenetic analysis based on mtDNA control region sequences provided a higher resolution for species identification. By sampling market products from four countries, we found the presence of two species of redfish (S. norvegicus and S. mentella) and one unidentified Pacific rockfish marketed in Europe. Furthermore, public databases revealed the existence of inaccurate reference sequences, likely stemming from species misidentification in previous studies, which currently hinders the efficacy of DNA methods for the identification of Sebastes market samples. PMID:29018597

  18. An Exploration of Managers’ Discourses of Workplace Bullying

    PubMed Central

    Johnson, Susan L.; Boutain, Doris M.; Tsai, Jenny Hsin-Chun; Beaton, Randal; de Castro, Arnold B.

    2017-01-01

    AIM To identify discourses used by hospital nursing unit managers to characterize workplace bullying, and their roles and responsibilities in workplace bullying management. BACKGROUND Nurses around the world have reported being the targets of bullying. These nurses often report that their managers do not effectively help them resolve the issue. There is scant research that examines this topic from the perspective of managers. METHODS This was a descriptive, qualitative study. Interviews were conducted with hospital nursing unit managers who were recruited via purposive and snowball sampling. Data were analyzed using Willig’s Foucauldian discourse analysis. RESULTS Managers characterized bullying as an interpersonal issue involving the target and the perpetrator, as an intrapersonal issue attributable to characteristics of the perpetrator, or as an ambiguous situation. For interpersonal bullying, managers described supporting target’s efforts to end bullying; for intrapersonal bullying, they described taking primary responsibility; and for ambiguous situations, they described several actions, including doing nothing. CONCLUSION Managers have different responses to different categories of bullying. Efforts need to be made to make sure they are correctly identifying and appropriately responding to incidents of workplace bullying. PMID:25597260

  19. Helical filaments of human Dmc1 protein on single-stranded DNA: a cautionary tale.

    PubMed

    Yu, Xiong; Egelman, Edward H

    2010-08-20

    Proteins in the RecA/Rad51/RadA family form nucleoprotein filaments on DNA that catalyze a strand exchange reaction as part of homologous genetic recombination. Because of the centrality of this system to many aspects of DNA repair, the generation of genetic diversity, and cancer when this system fails or is not properly regulated, these filaments have been the object of many biochemical and biophysical studies. A recent paper has argued that the human Dmc1 protein, a meiotic homolog of bacterial RecA and human Rad51, forms filaments on single-stranded DNA with approximately 9 subunits per turn in contrast to the filaments formed on double-stranded DNA with approximately 6.4 subunits per turn and that the stoichiometry of DNA binding is different between these two filaments. We show using scanning transmission electron microscopy that the Dmc1 filament formed on single-stranded DNA has a mass per unit length expected from approximately 6.5 subunits per turn. More generally, we show how ambiguities in helical symmetry determination can generate incorrect solutions and why one sometimes must use other techniques, such as biochemistry, metal shadowing, or scanning transmission electron microscopy, to resolve these ambiguities. While three-dimensional reconstruction of helical filaments from EM images is a powerful tool, the intrinsic ambiguities that may be present with limited resolution are not sufficiently appreciated. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  20. Real-time single-frequency GPS/MEMS-IMU attitude determination of lightweight UAVs.

    PubMed

    Eling, Christian; Klingbeil, Lasse; Kuhlmann, Heiner

    2015-10-16

    In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on the attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and the pitch angle and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases.
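    The sixteen-state error-space filter itself is beyond a short example, but the Kalman measurement update at its core is standard and compact. A two-state toy sketch (an attitude angle plus a gyro bias, with the angle measured directly; all matrices here are illustrative, not the paper's filter design):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: fuse measurement z (covariance R)
    into state estimate x (covariance P) via the gain K."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)      # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

# Toy 2-state example: [attitude angle, gyro bias]; only the angle is observed
x = np.array([0.0, 0.0])
P = np.diag([1.0, 0.1])
H = np.array([[1.0, 0.0]])
z = np.array([0.5])
R = np.array([[0.01]])
x_new, P_new = kalman_update(x, P, z, H, R)
print(np.round(x_new, 3))
```

With an accurate measurement (R much smaller than P), the update pulls the angle almost all the way to the observation while shrinking its variance; the bias is untouched here only because the prior cross-covariance is zero.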

  1. Towards robust deconvolution of low-dose perfusion CT: sparse perfusion deconvolution using online dictionary learning.

    PubMed

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C

    2013-05-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method achieves superior performance to existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
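    The deconvolution-based hemodynamic estimation that SPD builds on is commonly implemented as truncated-SVD inversion of the arterial input function's convolution matrix, with CBF proportional to the peak of the recovered residue function. This is a noiseless sketch of that conventional baseline, not the SPD method; the curve shapes and threshold are illustrative:

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt=1.0, thresh=1e-6):
    """Baseline SVD deconvolution for perfusion CT: recover the
    flow-scaled residue function k(t); CBF is proportional to max(k).
    With noisy data a much larger relative threshold (e.g. 0.1) is
    typically used to regularize."""
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix of the AIF
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > thresh * s.max()       # truncate small singular values
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ tissue))

t = np.arange(30.0)
aif = t * np.exp(-t / 3.0)            # gamma-variate-like arterial input
r = np.exp(-t / 5.0)                  # true residue function, flow = 1
tissue = np.convolve(aif, r)[:30]     # noiseless tissue curve
k = svd_deconvolve(aif, tissue)
print(round(float(k.max()), 2))       # ≈ 1.0, the simulated flow
```

The oscillatory artifacts the abstract mentions arise precisely when noise leaks through the small singular values, which motivates replacing plain truncation with the learned sparse prior.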

  2. Deconvolution of azimuthal mode detection measurements

    NASA Astrophysics Data System (ADS)

    Sijtsma, Pieter; Brouwer, Harry

    2018-05-01

    Unequally spaced transducer rings make it possible to extend the range of detectable azimuthal modes. The disadvantage is that the response of the mode detection algorithm to a single mode is distributed over all detectable modes, similarly to the Point Spread Function of Conventional Beamforming with microphone arrays. With multiple modes the response patterns interfere, leading to a relatively high "noise floor" of spurious modes in the detected mode spectrum, in other words, to a low dynamic range. In this paper a deconvolution strategy is proposed for increasing this dynamic range. It starts with separating the measured sound into shaft tones and broadband noise. For broadband noise modes, a standard Non-Negative Least Squares solver appeared to be a perfect deconvolution tool. For shaft tones a Matching Pursuit approach is proposed, taking advantage of the sparsity of dominant modes. The deconvolution methods were applied to mode detection measurements in a fan rig. An increase in dynamic range of typically 10-15 dB was found.
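    The broadband-noise step can be sketched directly: if the response of the detection algorithm to each true mode is a known column of a point-spread matrix, the detected mode powers are A x, and the spurious-mode floor is removed by solving for x ≥ 0 with a Non-Negative Least Squares solver. The diagonal-dominant response matrix below is an assumption for illustration, not a measured array response:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical response matrix: each true azimuthal mode leaks a little
# into every detectable mode, as with unequally spaced transducer rings.
n = 20
A = 0.05 * rng.random((n, n)) + np.eye(n)

x_true = np.zeros(n)
x_true[[3, 7]] = [1.0, 0.5]       # two dominant modes
b = A @ x_true                    # "measured" (noise-free) mode powers

x_hat, residual = nnls(A, b)      # deconvolved, nonnegative mode spectrum
print(np.flatnonzero(x_hat > 0.1))  # [3 7]
```

Because mode powers cannot be negative, the nonnegativity constraint alone suppresses most of the spurious-response floor, which is why NNLS works so well for the broadband part; the tonal part needs the sparsity-exploiting Matching Pursuit instead.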

  3. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values that give the projection from the curve to the surface corresponding to the smallest error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.

  4. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvolved using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
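    The forward model described above (chromatogram = peak-shape matrix × concentration vector) is easy to write down. In this sketch a ridge-regularized least-squares inverse stands in for the maximum-entropy solver, and the two-component sample is invented:

```python
import numpy as np

t = np.arange(100, dtype=float)

def peak_shape_matrix(t, width):
    """Column j is a Gaussian peak centred at time j: the chromatogram
    is this matrix times the vector of injected concentrations."""
    return np.exp(-0.5 * ((t[:, None] - t[None, :]) / width) ** 2)

S = peak_shape_matrix(t, width=4.0)
x_true = np.zeros_like(t)
x_true[[40, 48]] = [1.0, 0.8]      # two closely spaced components
chromatogram = S @ x_true          # overlapped peaks in the detector trace

# Ridge-regularized inverse (standing in for the max-entropy solver)
lam = 1e-6
x_hat = np.linalg.solve(S.T @ S + lam * np.eye(len(t)), S.T @ chromatogram)
print(np.allclose(S @ x_hat, chromatogram, atol=0.01))  # True
```

As in the abstract, the recoverable detail is limited by the peak width and noise: the smaller the regularization (or the lower the noise), the more of the fine concentration structure survives the inversion.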

  5. Joint deconvolution and classification with applications to passive acoustic underwater multipath.

    PubMed

    Anderson, Hyrum S; Gupta, Maya R

    2008-11-01

    This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering and avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example, a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance compared with traditional methods across a range of signal-to-noise ratios.

  6. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
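    Of the three methods compared, Wiener filtering is the most compact to sketch: divide by the PSF's transfer function in the frequency domain, damped by a noise-to-signal term. A 1D toy version with a Gaussian PSF and a noiseless step "fringe" (the NSR constant and all shapes are illustrative):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-4):
    """Frequency-domain Wiener deconvolution: G = H* / (|H|^2 + NSR)."""
    H = np.fft.fft(psf, n=len(blurred))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# Toy 1D edge (a stand-in for a phase-contrast fringe) blurred by a
# circular Gaussian PSF of width ~3 samples
n = 128
x = np.arange(n, dtype=float)
signal = (x > 64).astype(float)
d = np.minimum(np.arange(n), n - np.arange(n))      # circular distance
psf = np.exp(-0.5 * (d / 3.0) ** 2)
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))

restored = wiener_deconvolve(blurred, psf)
print(np.mean((restored - signal) ** 2) < np.mean((blurred - signal) ** 2))
```

The NSR term is what separates the three methods in practice: Wiener and Tikhonov pick a single global damping, while ForWaRD's extra wavelet stage shrinks the noise that the Fourier division amplifies, which is why it wins on fringe restoration.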

  7. A new scoring function for top-down spectral deconvolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Qiang; Wu, Si; Liu, Xiaowen

    2014-12-18

    Background: Top-down mass spectrometry plays an important role in intact protein identification and characterization. Top-down mass spectra are more complex than bottom-up mass spectra because they often contain many isotopomer envelopes from highly charged ions, which may overlap with one another. As a result, spectral deconvolution, which converts a complex top-down mass spectrum into a monoisotopic mass list, is a key step in top-down spectral interpretation. Results: In this paper, we propose a new scoring function, L-score, for evaluating isotopomer envelopes. By combining L-score with MS-Deconv, a new software tool, MS-Deconv+, was developed for top-down spectral deconvolution. Experimental results showed that MS-Deconv+ outperformed existing software tools in top-down spectral deconvolution. Conclusions: L-score shows high discriminative ability in identification of isotopomer envelopes. Using L-score, MS-Deconv+ reports many correct monoisotopic masses missed by other software tools, which are valuable for proteoform identification and characterization.
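    The precise definition of L-score is given in the paper; as a stand-in, a generic envelope-similarity measure such as a cosine similarity between theoretical and observed isotopomer intensities conveys what "evaluating an envelope" means. All intensities below are invented:

```python
import math

def envelope_score(theoretical, observed):
    """Cosine-style similarity between a theoretical isotopomer envelope
    and observed peak intensities (both given as intensity lists).
    A generic stand-in for an envelope score, not the paper's L-score."""
    dot = sum(t * o for t, o in zip(theoretical, observed))
    nt = math.sqrt(sum(t * t for t in theoretical))
    no = math.sqrt(sum(o * o for o in observed))
    return dot / (nt * no) if nt and no else 0.0

theo = [0.30, 0.45, 0.18, 0.07]   # hypothetical theoretical envelope
good = [0.28, 0.47, 0.17, 0.08]   # observed peaks matching the theory
bad = [0.05, 0.10, 0.45, 0.40]    # overlapping interference, wrong shape
print(envelope_score(theo, good) > envelope_score(theo, bad))  # True
```

A deconvolution tool applies such a score to every candidate (charge, monoisotopic mass) hypothesis and keeps only envelopes that score well, which is where a more discriminative function like L-score pays off.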

  8. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
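    The Richardson–Lucy algorithm used above as a comparison baseline is itself a multiplicative MAP-style scheme for Poisson noise, and is short enough to sketch. The Gaussian stand-in for the antenna pattern and the two-scatterer scene are invented for illustration:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Richardson–Lucy iteration for Poisson noise:
    x <- x * psf_flipped ⊛ (y / (psf ⊛ x))."""
    x = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(iterations):
        est = np.convolve(x, psf, mode="same")
        ratio = observed / np.maximum(est, 1e-12)
        x *= np.convolve(ratio, psf_flip, mode="same")
    return x

# Two point scatterers blurred by the antenna pattern (Gaussian here)
psf = np.exp(-0.5 * np.linspace(-3, 3, 15) ** 2)
psf /= psf.sum()
scene = np.zeros(100)
scene[[40, 55]] = [1.0, 0.6]
observed = np.convolve(scene, psf, mode="same")

restored = richardson_lucy(observed, psf)
print(int(restored.argmax()))
```

The multiplicative update keeps the estimate nonnegative and concentrates the blurred returns back toward the scatterer positions; the paper's MAP algorithm differs by modeling mixed Gaussian-Poisson noise and adding a Laplacian sparsity prior on the scene.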

  9. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    PubMed Central

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method achieves superior performance to existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  10. 2060 Chiron - Colorimetry and cometary behavior

    NASA Technical Reports Server (NTRS)

    Hartmann, William K.; Tholen, David J.; Meech, Karen J.; Cruikshank, Dale P.

    1990-01-01

    Ambiguities concerning the fit of 2060 Chiron's visible spectrum to its IR spectrum have been resolved using VRIJHK colorimetry obtained in 1988, which also confirms the neutrality of Chiron's taxonomic class C spectrum and indicates that Chiron has anomalously brightened since 1980-1983. This brightening, and one reported in 1978, are consistent with the hypothesis that Chiron sporadically undergoes weak cometary outbursts similar to those of comet P/Schwassmann-Wachmann 1; Chiron is further speculated to be an ice-rich object darkened by C-class carbonaceous soil, and may have been scattered from the Oort cloud in recent solar system history.

  11. Color-coordinate system from a 13th-century account of rainbows.

    PubMed

    Smithson, Hannah E; Anderson, Philip S; Dinkova-Bruun, Greti; Fosbury, Robert A E; Gasper, Giles E M; Laven, Philip; McLeish, Tom C B; Panti, Cecilia; Tanner, Brian K

    2014-04-01

    We present a new analysis of Robert Grosseteste's account of color in his treatise De iride (On the Rainbow), dating from the early 13th century. The work explores color within the 3D framework set out in Grosseteste's De colore [see J. Opt. Soc. Am. A29, A346 (2012)], but now links the axes of variation to observable properties of rainbows. We combine a modern understanding of the physics of rainbows and of human color perception to resolve the linguistic ambiguities of the medieval text and to interpret Grosseteste's key terms.

  12. Thermal equilibrium and statistical thermometers in special relativity.

    PubMed

    Cubero, David; Casado-Pascual, Jesús; Dunkel, Jörn; Talkner, Peter; Hänggi, Peter

    2007-10-26

    There is an intense debate in the recent literature about the correct generalization of Maxwell's velocity distribution in special relativity. The most frequently discussed candidate distributions include the Jüttner function as well as modifications thereof. Here we report results from fully relativistic one-dimensional molecular dynamics simulations that resolve the ambiguity. The numerical evidence unequivocally favors the Jüttner distribution. Moreover, our simulations illustrate that the concept of "thermal equilibrium" extends naturally to special relativity only if a many-particle system is spatially confined. They make evident that "temperature" can be statistically defined and measured in an observer frame independent way.
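    The one-dimensional Jüttner weight exp(−β√(m²c⁴ + p²c²)) that the simulations favor has a closed-form normalization, 2mc·K₁(βmc²), which a quick numerical check confirms (natural units c = 1; the mass and inverse temperature below are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Natural units: c = 1, particle mass m, inverse temperature beta
m, beta = 1.0, 2.0

def juttner_1d(p):
    """Unnormalized one-dimensional Jüttner weight exp(-beta * E(p))."""
    return np.exp(-beta * np.sqrt(m**2 + p**2))

Z_analytic = 2.0 * m * kv(1, beta * m)            # 2 m c K_1(beta m c^2)
Z_numeric, _ = quad(juttner_1d, -np.inf, np.inf)  # direct integration
print(round(Z_numeric / Z_analytic, 6))           # 1.0
```

The identity follows from the substitution p = mc·sinh θ, which turns the momentum integral into the integral representation of the modified Bessel function K₁.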

  13. Theory of post-block 2 VLBI observable extraction

    NASA Technical Reports Server (NTRS)

    Lowe, Stephen T.

    1992-01-01

    The algorithms used in the post-Block II fringe-fitting software called 'Fit' are described. The steps needed to derive the very long baseline interferometry (VLBI) charged-particle corrected group delay, phase delay rate, and phase delay (the latter without resolving cycle ambiguities) are presented beginning with the set of complex fringe phasors as a function of observation frequency and time. The set of complex phasors is obtained from the JPL/CIT Block II correlator. The output of Fit is the set of charged-particle corrected observables (along with ancillary information) in a form amenable to the software program 'Modest.'
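    The group delay observable mentioned above is, in essence, the slope of fringe phase versus frequency, τ = (1/2π)·dφ/df. A sketch with synthetic channel phases (all values invented, not Fit's actual algorithm; the channel spacing is chosen so that phase increments stay under half a cycle, i.e. no cycle ambiguities need resolving):

```python
import numpy as np

def group_delay(freqs_hz, phases_rad):
    """Estimate VLBI-style group delay as the slope of fringe phase
    versus frequency: tau = (1 / 2*pi) * d(phi)/d(f)."""
    # Center frequencies before the linear fit for numerical conditioning
    f0 = freqs_hz.mean()
    slope = np.polyfit(freqs_hz - f0, np.unwrap(phases_rad), 1)[0]
    return slope / (2.0 * np.pi)

# Synthetic fringes: a 10 ns delay observed across 16 channel frequencies
tau_true = 10e-9
freqs = np.linspace(8.0e9, 8.5e9, 16)
phases = np.angle(np.exp(2j * np.pi * freqs * tau_true))  # wrapped phases
print(round(group_delay(freqs, phases) * 1e9, 3))         # 10.0 (ns)
```

With wider channel spacing the per-channel phase increment exceeds half a cycle and `np.unwrap` fails, which is exactly the cycle-ambiguity problem the abstract notes the phase delay observable inherits.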

  14. Measuring Black Hole Spin

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    1999-09-01

    We propose to carry out a systematic study of emission and absorption spectral features that are often seen in X-ray spectra of black hole binaries. The excellent sensitivity and energy resolution of the ACIS/HETG combination will not only help resolve ambiguities in interpreting these features, but may allow modelling of the emission line profiles in detail. The profiles may contain information on such fundamental properties as the spin of black holes. Therefore, this study could lead to a measurement of black hole spin for selected sources. The result can then be directly compared with those from previous studies based on independent methods.

  15. Beyond "objective" and "projective": a logical system for classifying psychological tests: comment on Meyer and Kurtz (2006).

    PubMed

    Wagner, Edwin E

    2008-07-01

    I present a formal system that accounts for the misleading distinction between tests formerly termed objective and projective, duly noted by Meyer and Kurtz (2006). Three principles of Response Rightness, Response Latitude and Stimulus Ambiguity are shown to govern, in combination, the formal operating characteristics of tests, producing inevitable overlap between "objective" and "projective" tests and creating at least three "types" of tests historically regarded as being projective in nature. The system resolves many past issues regarding test classification and can be generalized to include all psychological tests.

  16. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, and 2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms: the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete LiDAR data, and parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites spanning diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false-echo detection rate, especially with the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy, with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas.
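    For readers unfamiliar with the Gold algorithm, the following numpy sketch shows its multiplicative iteration on a synthetic 1-D waveform; the Gaussian transmit pulse, grid sizes, and iteration count are illustrative assumptions, not the paper's optimized parameters:

```python
import numpy as np

def convolution_matrix(kernel, n):
    """Circulant matrix C such that C @ x is the circular convolution
    of x with kernel (kernel zero-padded to length n)."""
    col = np.pad(kernel, (0, n - len(kernel)))
    return np.array([np.roll(col, s) for s in range(n)]).T

def gold_deconvolve(y, H, n_iter=1000, eps=1e-12):
    """Gold's multiplicative deconvolution: x <- x * (H^T y) / (H^T H x).
    The update preserves non-negativity of the estimate."""
    HtH = H.T @ H
    Hty = H.T @ y
    x = np.full_like(y, y.mean() + eps)   # flat, positive starting estimate
    for _ in range(n_iter):
        x = x * Hty / (HtH @ x + eps)
    return x

# synthetic waveform: two echoes blurred by a Gaussian transmit pulse
n = 64
pulse = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
H = convolution_matrix(pulse, n)
truth = np.zeros(n)
truth[20], truth[26] = 1.0, 0.6
y = H @ truth

x = gold_deconvolve(y, H)
```

    The iteration sharpens the blurred waveform back toward the two echo positions while keeping the estimate non-negative, which is why the method is attractive for echo detection.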

  17. CHEERS Results on Mrk 573: A Study of Deep Chandra Observations

    NASA Astrophysics Data System (ADS)

    Paggi, Alessandro; Wang, Junfeng; Fabbiano, Giuseppina; Elvis, Martin; Karovska, Margarita

    2012-09-01

    We present results on Mrk 573 obtained as part of the CHandra survey of Extended Emission-line Regions in nearby Seyfert galaxies (CHEERS). Previous studies showed that this source features a biconical emission in the soft X-ray band closely related to the narrow-line region as mapped by the [O III] emission line and the radio emission, though on a smaller scale; we investigate the properties of soft X-ray emission from this source with new deep Chandra observations. Making use of the subpixel resolution of the Chandra/ACIS image and point-spread function deconvolution, we resolve and study substructures in each ionizing cone. The two cone spectra are fitted with a photoionization model, showing a mildly photoionized phase diffused over the bicone. Thermal collisional gas at ~1.1 keV and ~0.8 keV appears to be located between the nucleus and the "knots" resolved in radio observations, and between the "arcs" resolved in the optical images, respectively; this can be interpreted in terms of shock interaction with the host galactic plane. The nucleus shows a significant flux decrease across the observations indicating variability of the active galactic nucleus (AGN), with the nuclear region featuring a higher ionization parameter with respect to the bicone region. The long exposure allows us to find extended emission up to ~7 kpc from the nucleus along the bicone axis. Significant emission is also detected in the direction perpendicular to the ionizing cones, disagreeing with the fully obscuring torus prescribed in the AGN unified model and suggesting instead the presence of a clumpy structure.

  18. Processing of single channel air and water gun data for imaging an impact structure at the Chesapeake Bay

    USGS Publications Warehouse

    Lee, Myung W.

    1999-01-01

    Processing of 20 seismic profiles acquired in the Chesapeake Bay area aided in analysis of the details of an impact structure and allowed more accurate mapping of the depression caused by a bolide impact. Particular emphasis was placed on enhancement of seismic reflections from the basement. Application of wavelet deconvolution after a second zero-crossing predictive deconvolution improved the resolution of shallow reflections, and application of a match filter enhanced the basement reflections. The use of deconvolution and match filtering with a two-dimensional signal enhancement technique (F-X filtering) significantly improved the interpretability of seismic sections.

  19. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm on its own, but as the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
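    To make the convolution/deconvolution pair concrete, here is a small numpy sketch of the numerical (point-area style) approach such a spreadsheet implements; the release-rate profile, one-compartment unit impulse response, and time grid are invented for illustration:

```python
import numpy as np

def convolve_response(rate, uir, dt):
    """In-vivo response as the discrete convolution of the in-vitro
    input rate with the unit impulse response (UIR) on a uniform grid."""
    return np.array([np.sum(rate[:i + 1] * uir[i::-1]) * dt
                     for i in range(len(rate))])

def deconvolve_response(response, uir, dt):
    """Recover the input rate by inverting the lower-triangular
    convolution system (deconvolution as inverse of convolution)."""
    n = len(response)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = uir[i::-1] * dt   # A[i, j] = uir[i - j] * dt
    return np.linalg.solve(A, response)

# illustrative data: zero-order release for 2 h, one-compartment UIR
dt = 0.1
t = np.arange(50) * dt
uir = np.exp(-0.8 * t)                 # unit impulse response
rate = np.where(t < 2.0, 1.0, 0.0)     # in-vitro release rate
resp = convolve_response(rate, uir, dt)
back = deconvolve_response(resp, uir, dt)
```

    Because the convolution matrix is lower triangular with a nonzero diagonal, the deconvolution step recovers the input profile exactly on noise-free data; with real data, some smoothing or regularization is usually needed.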

  20. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    Image deconvolution is a challenging task in the field of image processing. Using image pairs can provide a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and the gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.

  1. Windprofiler optimization using digital deconvolution procedures

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to data acquisition procedures used for windprofiler radars have the potential to improve height coverage at optimum resolution and to permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, but this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any special-purpose hardware other than a transmitter (and associated drivers), a receiver, and a digitizer. No digital signal processor (DSP) chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have been able not only to optimize height resolution but also to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.

  2. Angle-resolved spectral Fabry-Pérot interferometer for single-shot measurement of refractive index dispersion over a broadband spectrum

    NASA Astrophysics Data System (ADS)

    Dong, J. T.; Ji, F.; Xia, H. J.; Liu, Z. J.; Zhang, T. D.; Yang, L.

    2018-01-01

    An angle-resolved spectral Fabry-Pérot interferometer is reported for fast and accurate measurement of the refractive index dispersion of optical materials with a parallel-plate shape. The light sheet from the wavelength-tunable laser is incident on the parallel plate at converging angles. The transmitted interference light for each angle is dispersed and captured by a 2D sensor, in which the rows and the columns are used to simultaneously record the intensities as a function of wavelength and incident angle, respectively. The interferogram, named the angle-resolved spectral intensity distribution, is analyzed by fitting the phase information instead of finding the fringe peak locations, which present periodic ambiguity. The refractive index dispersion and the physical thickness can then be retrieved from a single-shot interferogram within 18 s. Experimental results on an optical substrate standard indicate that the accuracy of the refractive index dispersion is better than 2.5 × 10⁻⁵ and the uncertainty of the thickness is 6 × 10⁻⁵ mm (3σ), owing to the high stability and the single-shot measurement of the proposed system.

  3. DNA barcoding for conservation, seed banking and ecological restoration of Acacia in the Midwest of Western Australia.

    PubMed

    Nevill, Paul G; Wallace, Mark J; Miller, Joseph T; Krauss, Siegfried L

    2013-11-01

    We used DNA barcoding to address an important conservation issue in the Midwest of Western Australia, working on Australia's largest genus of flowering plants. We tested whether or not currently recommended plant DNA barcoding regions (matK and rbcL) were able to discriminate Acacia taxa of varying phylogenetic distances, and ultimately to identify an ambiguously labelled seed collection from a mine-site restoration project. Although matK successfully identified the unknown seed as the rare and conservation-priority-listed A. karina, and was able to resolve six of the eleven study species, this region was difficult to amplify and sequence. In contrast, rbcL was straightforward to recover and align, but could not determine the origin of the seed and resolved only three of the eleven species. Other chloroplast regions (rpl32-trnL, psbA-trnH, trnL-F and trnK) had mixed success in resolving the studied taxa. In general, species were better resolved in multilocus data sets than in single-locus data sets. We recommend using the formal barcoding regions supplemented with data from other plastid regions, particularly rpl32-trnL, for barcoding in Acacia. Our study demonstrates the novel use of DNA barcoding for seed identification and illustrates the practical potential of DNA barcoding for the growing discipline of restoration ecology. © 2013 John Wiley & Sons Ltd.

  4. Confocal Raman spectroscopic analysis of cross-linked ultra-high molecular weight polyethylene for application in artificial hip joints.

    PubMed

    Pezzotti, Giuseppe; Kumakura, Tsuyoshi; Yamada, Kiyotaka; Tateiwa, Toshiyuki; Puppulin, Leonardo; Zhu, Wenliang; Yamamoto, Kengo

    2007-01-01

    Confocal spectroscopic techniques are applied to selected Raman bands to study the microscopic features of acetabular cups made of ultra-high molecular weight polyethylene (UHMWPE) before and after implantation in vivo. The micrometric lateral resolution of a laser beam focused on the polymeric surface (or subsurface) enables a highly resolved visualization of 2-D conformational population patterns, including crystalline, amorphous, and orthorhombic phase fractions, and the oxidation index. An optimized confocal probe configuration, aided by a computational deconvolution of the optical probe, allows minimization of the probe size along the in-depth direction and a nondestructive evaluation of microstructural properties along the material subsurface. Computational deconvolution is also attempted, based on an experimental assessment of the probe response function of the polyethylene Raman spectrum, according to a defocusing technique. A statistical set of high-resolution microstructural data is collected on a fully 3-D level on gamma-ray-irradiated UHMWPE acetabular cups, both as received from the maker and after retrieval from a human body. Microstructural properties reveal significant gradients along the immediate material subsurface, and distinct differences are found due to the loading history in vivo, which cannot be revealed by conventional optical spectroscopy. The applicability of the confocal spectroscopic technique extends beyond the particular retrieval cases examined in this study, and it can easily be extended to the evaluation of in-vitro-tested components or to the quality control of new polyethylene brands. Confocal Raman spectroscopy may also help rationalize the complex effects of gamma-ray irradiation on the surface of medical-grade UHMWPE for total joint replacement and, ultimately, predict the actual lifetime of such components in vivo.

  5. Upgrade of a Scanning Confocal Microscope to a Single-Beam Path STED Microscope

    PubMed Central

    Klauss, André; König, Marcelle; Hille, Carsten

    2015-01-01

    By overcoming the diffraction limit in light microscopy, super-resolution techniques, such as stimulated emission depletion (STED) microscopy, are having an increasing impact on the life sciences. High costs and technically demanding setups, however, may still hinder a wider distribution of this innovation in biomedical research laboratories. As far-field microscopy is the most widely employed microscopy modality in the life sciences, upgrading already existing systems seems to be an attractive option for achieving diffraction-unlimited fluorescence microscopy in a cost-effective manner. Here, we demonstrate the successful upgrade of a commercial time-resolved confocal fluorescence microscope to an easy-to-align STED microscope in the single-beam path layout, previously proposed as “easy-STED”, achieving a lateral resolution < λ/10, corresponding to a five-fold improvement over the confocal modality. For this purpose, both the excitation and depletion laser beams pass through a commercially available segmented phase plate that creates the STED-doughnut light distribution in the focal plane while leaving the excitation beam unaltered when implemented into the joint beam path. Diffraction-unlimited imaging of 20 nm-sized fluorescent beads as a reference was achieved with the wavelength combination of 635 nm excitation and 766 nm depletion. To evaluate the STED performance in biological systems, we compared the popular phalloidin-coupled fluorescent dyes Atto647N and Abberior STAR635 by labeling F-actin filaments in vitro as well as through immunofluorescence recordings of microtubules in a complex epithelial tissue. Here, we applied a recently proposed deconvolution approach and showed that images obtained from time-gated pulsed STED microscopy may benefit, in terms of the signal-to-background ratio, from the joint deconvolution of sub-images with different spatial information extracted from offline time gating. PMID:26091552

  6. A Dust-scattering Halo of 4U 1630–47 Observed with Chandra and Swift: New Constraints on the Source Distance

    NASA Astrophysics Data System (ADS)

    Kalemci, E.; Maccarone, T. J.; Tomsick, J. A.

    2018-06-01

    We have observed the Galactic black hole transient 4U 1630‑47 during the decay of its 2016 outburst with Chandra and Swift to investigate the properties of the dust-scattering halo created by the source. The scattering halo shows a structure that includes a bright ring between 80″ and 240″ surrounding the source, and a continuous distribution beyond 250″. An analysis of the 12CO J = 1–0 map and spectrum in the line of sight to the source indicates that a molecular cloud with a radial velocity of ‑79 km s‑1 (denoted MC ‑79) is the main scattering body that creates the bright ring. We found additional clouds in the line of sight, calculated their kinematic distances, and resolved the well-known “near” and “far” distance ambiguity for most of the clouds. At the favored far-distance estimate of MC ‑79, the modeling of the surface brightness profile results in a distance to 4U 1630‑47 of 11.5 ± 0.3 kpc. If MC ‑79 is at the near distance, then 4U 1630‑47 is at 4.7 ± 0.3 kpc. Future Chandra, Swift, and submillimeter radio observations could not only resolve this ambiguity but also provide information regarding the properties of dust and the distribution of all molecular clouds along the line of sight. Using the results of this study, we also discuss the nature of this source and the reasons for the observation of an anomalously low soft state during the 2010 decay.

  7. Neural network and letter recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hue Yeon.

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components, a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is then considered. By correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some of the deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses the data through the Gabor transform. Pattern-dependent choice of the centers and wavelengths of the Gabor filters is the source of the shift and scale tolerance of the system. Three different learning schemes were investigated in the recognition unit: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected; some were used as training sets and the remainder as test sets.

  8. Seven new dolphin mitochondrial genomes and a time-calibrated phylogeny of whales

    PubMed Central

    Xiong, Ye; Brandley, Matthew C; Xu, Shixia; Zhou, Kaiya; Yang, Guang

    2009-01-01

    Background The phylogeny of Cetacea (whales) is not fully resolved with substantial support. The ambiguous and conflicting results of multiple phylogenetic studies may be the result of the use of too little data, phylogenetic methods that do not adequately capture the complex nature of DNA evolution, or both. In addition, there is also evidence that the generic taxonomy of Delphinidae (dolphins) underestimates its diversity. To remedy these problems, we sequenced the complete mitochondrial genomes of seven dolphins and analyzed these data with partitioned Bayesian analyses. Moreover, we incorporate a newly developed "relaxed" molecular clock to model heterogeneous rates of evolution among cetacean lineages. Results The "deep" phylogenetic relationships are well supported, including the monophyly of Cetacea and Odontoceti. However, there is ambiguity in the phylogenetic affinities of two of the river dolphin clades, Platanistidae (Indian River dolphins) and Lipotidae (Yangtze River dolphins). The phylogenetic analyses support a sister relationship between Delphinidae and Monodontidae + Phocoenidae. Additionally, there is statistically significant support for the paraphyly of Tursiops (bottlenose dolphins) and Stenella (spotted dolphins). Conclusion Our phylogenetic analysis of complete mitochondrial genomes using recently developed models of rate autocorrelation resolved the phylogenetic relationships of the major cetacean lineages with a high degree of confidence. Our results indicate that a rapid radiation of lineages explains the lack of support for the placement of Platanistidae and Lipotidae. Moreover, our estimation of molecular divergence dates indicates that these radiations occurred in the Middle to Late Oligocene and Middle Miocene, respectively. Furthermore, by collecting and analyzing seven new mitochondrial genomes, we provide strong evidence that the delphinid genera Tursiops and Stenella are not monophyletic, and that the current taxonomy masks potentially interesting patterns of morphological, physiological, behavioral, and ecological evolution. PMID:19166626

  9. Strehl-constrained iterative blind deconvolution for post-adaptive-optics data

    NASA Astrophysics Data System (ADS)

    Desiderà, G.; Carbillet, M.

    2009-12-01

    Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.

  10. Histogram deconvolution - An aid to automated classifiers

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  11. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.

  12. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  13. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
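    As a sketch of the best-performing scheme, here is a minimal 1-D Richardson-Lucy implementation with numpy (without the total variation regularization the authors add, and with the 2-D C-scan case requiring a 2-D convolution in place of np.convolve); the PSF and the synthetic reflector line are illustrative:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=100, eps=1e-12):
    """1-D Richardson-Lucy deconvolution. psf should be normalized to
    unit sum; the multiplicative update keeps the estimate non-negative."""
    psf_mirror = psf[::-1]                    # adjoint of the blur operator
    x = np.full_like(y, y.mean())             # flat, positive starting guess
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / (blurred + eps)
        x = x * np.convolve(ratio, psf_mirror, mode="same")
    return x

# synthetic C-scan line: a single reflector blurred by a Gaussian PSF
psf = np.exp(-0.5 * ((np.arange(11) - 5) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(80)
truth[30] = 5.0
y = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(y, psf)
```

    With noise-free data the iteration progressively sharpens the blurred reflector back toward its true position; with real noisy C-scans, regularization or early stopping is needed to avoid noise amplification.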

  14. Nimbus 7 earth radiation budget wide field of view climate data set improvement. I - The earth albedo from deconvolution of shortwave measurements

    NASA Technical Reports Server (NTRS)

    Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee

    1987-01-01

    A deconvolution method for extracting the top of the atmosphere (TOA) mean daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the deconvolution-derived albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.

  15. Deconvolution of astronomical images using SOR with adaptive relaxation.

    PubMed

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
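    A minimal sketch of the SOR iteration itself (without the authors' adaptive relaxation-parameter updates or the positivity projection of +SOR) might look like the following; the small symmetric positive-definite test system is an arbitrary stand-in for the normal equations of a deconvolution problem:

```python
import numpy as np

def sor_solve(A, b, omega=1.5, n_iter=200):
    """Successive overrelaxation for A x = b: each Gauss-Seidel update
    is blended with the previous iterate through the relaxation
    parameter omega (0 < omega < 2 guarantees convergence for SPD A)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]      # off-diagonal contribution
            gs = (b[i] - sigma) / A[i, i]          # plain Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

# small SPD test system standing in for (H^T H) x = H^T y
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = sor_solve(A, b)
```

    The convergence rate hinges on omega, which is exactly why the paper's strategy for choosing and updating it matters; omega = 1 reduces the scheme to plain Gauss-Seidel.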

  16. Gaussian and linear deconvolution of LC-MS/MS chromatograms of the eight aminobutyric acid isomers

    PubMed Central

    Vemula, Harika; Kitase, Yukiko; Ayon, Navid J.; Bonewald, Lynda; Gutheil, William G.

    2016-01-01

    Isomeric molecules present a challenge for analytical resolution and quantification, even with MS-based detection. The eight aminobutyric acid (ABA) isomers are of interest for their various biological activities, particularly γ-aminobutyric acid (GABA) and the d- and l-isomers of β-aminoisobutyric acid (β-AIBA; BAIBA). This study aimed to investigate LC-MS/MS-based resolution of these ABA isomers as their Marfey's (Mar) reagent derivatives. HPLC was able to completely separate three Mar-ABA isomers (l-β-ABA (l-BABA), and l- and d-α-ABA (AABA)), with three isomers (GABA and d/l-BAIBA) in one chromatographic cluster and two isomers (α-AIBA (AAIBA) and d-BABA) in a second cluster. Partially separated cluster components were deconvoluted using Gaussian peak fitting, except for GABA and d-BAIBA. MS/MS detection of Marfey's-derivatized ABA isomers provided six MS/MS fragments, with substantially different intensity profiles between structural isomers. This allowed linear deconvolution of ABA isomer peaks. Combining HPLC separation with linear and Gaussian deconvolution allowed resolution of all eight ABA isomers. Application to human serum found a substantial level of l-AABA (13 μM), an intermediate level of l-BAIBA (0.8 μM), and low but detectable levels (<0.2 μM) of GABA, l-BABA, AAIBA, d-BAIBA, and d-AABA. This approach should be useful for LC-MS/MS deconvolution of other challenging groups of isomeric molecules. PMID:27771391
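    The linear-deconvolution step can be illustrated as a small least-squares unmixing problem: co-eluting isomers are quantified by fitting the observed fragment intensities as a linear combination of each isomer's reference fragment profile. The numbers below are invented for illustration, not the measured profiles:

```python
import numpy as np

# Columns: reference MS/MS fragment intensity profiles of two co-eluting
# isomers; rows: fragment ions. Values are illustrative assumptions.
ref_profiles = np.array([[0.60, 0.10],
                         [0.25, 0.55],
                         [0.15, 0.35]])

# Observed fragment intensities of the unresolved peak: a 2:1 mixture
true_amounts = np.array([2.0, 1.0])
observed = ref_profiles @ true_amounts

# Linear deconvolution: least-squares estimate of each isomer's amount
amounts, *_ = np.linalg.lstsq(ref_profiles, observed, rcond=None)
```

    Because the fragment profiles of the structural isomers differ substantially, the system is well conditioned and the mixture fractions are recovered accurately; strongly similar profiles would make the fit unstable.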

  17. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    PubMed

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.

  18. Improvements in GPS precision: 10 Hz to one day

    NASA Astrophysics Data System (ADS)

    Choi, Kyuhong

    Seeking to understand Global Positioning System (GPS) measurements and the positioning solutions in various time intervals, this dissertation improves the consistency of pseudorange measurements from different receiver types, processes 30 s interval data with optimized filtering techniques, and analyzes very-high-rate data with short arc lengths and baseline noise. The first project studies satellite-dependent biases between C/A and P1 codes. Calibrating these biases reduces the inconsistency of satellite clocks, improving the ambiguity resolution which allows for higher position precision. Receiver-dependent biases for two receivers are compared with the bias products of the Center for Orbit Determination in Europe (CODE). Baseline lengths ranging up to ~2,100 km are tested with the receiver-specific biases; they resolve 4.3% more phase ambiguities than CODE's products do. The second project analyzes 1 s and 30 s interval GPS data of the 2003 Tokachi-Oki earthquake. For 1 Hz positioning, the Iterative Tropospheric Estimation (ITE) method improves vertical precision. While equalized sidereal filtering reduces noise for multipath-dominant 30-300 s periods, it can cause long-term drifts in the time series. A study of postseismic deformation after the Tokachi-Oki earthquake uses 30 s interval position estimates to test multiple filtering strategies to maximize precision using lower-rate data. On top of the residual stacking, estimation of a random walk constraint of σΔ = 1.80 cm/hr shows maximum noise reduction capability while retaining the real deformation signal. These techniques enhance our grasp of fault response in the aftermath of great earthquakes. The third project probes noise floor characteristics of very-high-rate (> 1 Hz) GPS data. A hybrid method, designed and tested to resolve phase biases, minimizes computational burdens while keeping the quality of ambiguity-fixed solutions. 
Noise characteristics are compared for 5 and 10 Hz data from Ashtech MicroZ and ZFX as well as Trimble NetRS receivers. The Trimble NetRS receiver noise has a time-series standard deviation double that of the Ashtech MicroZ receivers, and its power spectral density function shows a 0.1 Hz peak. Noise power is white at frequencies of 2 Hz and above. Each research project assesses methods to reduce noise and/or biases at various time intervals, and each method considered in this dissertation addresses the needs of scientific applications.

  19. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the width of the PSF and with the noise level. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise, and Richardson-Lucy works better with a wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
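    For intuition, the two better-performing estimators can each be written in a few lines for a 1-D signal. The PSF, noise-free setup, and parameter choices below are illustrative assumptions, not the simulation conditions of the abstract:

```python
import numpy as np

def wiener_deconv(y, h, k=1e-4):
    """Frequency-domain Wiener deconvolution with a constant
    noise-to-signal ratio k."""
    H, Y = np.fft.fft(h, len(y)), np.fft.fft(y)
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + k)))

def richardson_lucy(y, h, iters=100):
    """Multiplicative Richardson-Lucy iteration (circular convolution)."""
    H = np.fft.fft(h, len(y))
    conv = lambda v, F: np.real(np.fft.ifft(np.fft.fft(v) * F))
    x = np.full(len(y), y.mean())
    for _ in range(iters):
        x = x * conv(y / np.maximum(conv(x, H), 1e-12), np.conj(H))
    return x

n = 128
i = np.arange(n)
psf = np.exp(-0.5 * np.minimum(i, n - i) ** 2.0)   # narrow wrap-around PSF
psf /= psf.sum()
truth = np.zeros(n); truth[40] = 1.0; truth[80] = 0.5
blur = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
x_w = wiener_deconv(blur, psf)
x_rl = richardson_lucy(blur, psf)
```

    The constant `k` plays the role of the noise regularizer in the Wiener filter, while the iteration count bounds noise amplification in Richardson-Lucy.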

  20. Surface determination through atomically resolved secondary-electron imaging

    PubMed Central

    Ciston, J.; Brown, H. G.; D'Alfonso, A. J.; Koirala, P.; Ophus, C.; Lin, Y.; Suzuki, Y.; Inada, H.; Zhu, Y.; Allen, L. J.; Marks, L. D.

    2015-01-01

    Unique determination of the atomic structure of technologically relevant surfaces is often limited by both a need for homogeneous crystals and ambiguity of registration between the surface and bulk. Atomically resolved secondary-electron imaging is extremely sensitive to this registration and is compatible with faceted nanomaterials, but has not been previously utilized for surface structure determination. Here we report a detailed experimental atomic-resolution secondary-electron microscopy analysis of the c(6 × 2) reconstruction on strontium titanate (001) coupled with careful simulation of secondary-electron images, density functional theory calculations and surface monolayer-sensitive aberration-corrected plan-view high-resolution transmission electron microscopy. Our work reveals several unexpected findings, including an amended registry of the surface on the bulk and strontium atoms with unusual seven-fold coordination within a typically high surface coverage of square pyramidal TiO5 units. Dielectric screening is found to play a critical role in attenuating secondary-electron generation processes from valence orbitals. PMID:26082275

  1. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII): Spatially Resolved Spectroscopy in the Far-Infrared

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2009-01-01

    Astronomical studies at infrared wavelengths have dramatically improved our understanding of the universe, and observations with Spitzer, the upcoming Herschel mission, and SOFIA will continue to provide exciting new discoveries. The relatively low angular resolution of these missions, however, is insufficient to resolve the physical scale on which mid-to far-infrared emission arises, resulting in source and structure ambiguities that limit our ability to answer key science questions. Interferometry enables high angular resolution at these wavelengths - a powerful tool for scientific discovery. We will build the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII), an eight-meter baseline Michelson stellar interferometer to fly on a high-altitude balloon. BETTII's spectral-spatial capability, provided by an instrument using double-Fourier techniques, will address key questions about the nature of disks in young star clusters and active galactic nuclei and the envelopes of evolved stars. BETTII will also lay the technological groundwork for future space interferometers and for suborbital programs optimized for studying extrasolar planets.

  2. Kato perturbative expansion in classical mechanics and an explicit expression for the Deprit generator

    NASA Astrophysics Data System (ADS)

    Nikolaev, A. S.

    2015-03-01

    We study the structure of the canonical Poincaré-Lindstedt perturbation series in the Deprit operator formalism and establish its connection to the Kato resolvent expansion. A discussion of invariant definitions for averaging and integrating perturbation operators and their canonical identities reveals a regular pattern in the series for the Deprit generator. This regularity is explained using Kato series and the relation of the perturbation operators to the Laurent coefficients for the resolvent of the Liouville operator. This purely canonical approach systematizes the series and leads to an explicit expression for the Deprit generator, valid in any order of perturbation theory, in terms of the partial pseudoinverse of the perturbed Liouville operator. The corresponding Kato series provides a reasonably effective computational algorithm. The canonical connection of the perturbed and unperturbed averaging operators allows describing ambiguities in the generator and transformed Hamiltonian, while Gustavson integrals turn out to be insensitive to the normalization style. We use nonperturbative examples for illustration.

  3. Surface determination through atomically resolved secondary-electron imaging

    DOE PAGES

    Ciston, J.; Brown, H. G.; D’Alfonso, A. J.; ...

    2015-06-17

    We report that unique determination of the atomic structure of technologically relevant surfaces is often limited by both a need for homogeneous crystals and ambiguity of registration between the surface and bulk. Atomically resolved secondary-electron imaging is extremely sensitive to this registration and is compatible with faceted nanomaterials, but has not been previously utilized for surface structure determination. Here we show a detailed experimental atomic-resolution secondary-electron microscopy analysis of the c(6 × 2) reconstruction on strontium titanate (001) coupled with careful simulation of secondary-electron images, density functional theory calculations and surface monolayer-sensitive aberration-corrected plan-view high-resolution transmission electron microscopy. Our work reveals several unexpected findings, including an amended registry of the surface on the bulk and strontium atoms with unusual seven-fold coordination within a typically high surface coverage of square pyramidal TiO5 units. Lastly, dielectric screening is found to play a critical role in attenuating secondary-electron generation processes from valence orbitals.

  4. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). 
More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, high levels of uncertainty occur mostly in areas with steep slopes and tall vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high fidelity processing of waveform LiDAR data to characterize vegetation structures.
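    The Gold algorithm referenced above is a multiplicative ratio iteration on the normal equations, which keeps the estimate non-negative by construction. A minimal 1-D sketch with an invented Gaussian system response and echo positions (not the paper's LiDAR setup):

```python
import numpy as np

def gold_deconvolution(y, H, iters=2000):
    """Gold's multiplicative deconvolution: iterate x <- x * b / (A x) on the
    normal equations A x = b (A = H^T H, b = H^T y); positivity is preserved
    because every factor stays non-negative."""
    A, b = H.T @ H, H.T @ y
    x = np.full(H.shape[1], y.mean())
    for _ in range(iters):
        x = x * b / np.maximum(A @ x, 1e-12)
    return x

# Toy waveform: two echoes blurred by a Gaussian system response
n = 100
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)
truth = np.zeros(n); truth[30] = 1.0; truth[60] = 0.7
y = H @ truth
x = gold_deconvolution(y, H)
```

    Unlike inverse filtering, the ratio update cannot produce negative echo amplitudes, which is one reason it suits waveform returns.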

  5. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogenous biospecimens offer a promising solution, however the performance of such methods depends entirely on the library of methylation markers being used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. When compared to existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038), and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. 
In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs compared to existing libraries for whole blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
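    The core of reference-based cell-mixture deconvolution, independent of which marker library is chosen, is a constrained regression of the bulk methylation profile on cell-type reference profiles. A toy numpy sketch with random stand-in profiles and a simple clip-and-renormalize projection (an assumption for illustration, not IDOL itself):

```python
import numpy as np

rng = np.random.default_rng(7)
# Reference methylation profiles: rows = CpG markers in the library,
# columns = cell types. Random placeholders stand in for real markers.
ref = rng.uniform(0, 1, size=(300, 6))
true_frac = np.array([0.55, 0.25, 0.08, 0.06, 0.04, 0.02])
bulk = ref @ true_frac + rng.normal(0, 0.01, 300)   # heterogeneous bulk sample

# Deconvolution: least squares, then project onto the simplex by clipping
# negatives and renormalizing so the cell fractions sum to one.
est, *_ = np.linalg.lstsq(ref, bulk, rcond=None)
est = np.clip(est, 0, None)
est /= est.sum()
```

    The library choice enters through `ref`: IDOL's contribution is selecting the 300 rows so that this regression is as well conditioned and accurate as possible.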

  6. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful to estimate the blur kernel, and thus assists in the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be the isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods including the Graph Cut and the Mumford-Shah (MS) method have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to a high performance for tumor segmentation in PET. 
This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.

  7. Real-Time Single-Frequency GPS/MEMS-IMU Attitude Determination of Lightweight UAVs

    PubMed Central

    Eling, Christian; Klingbeil, Lasse; Kuhlmann, Heiner

    2015-01-01

    In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on the attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and the pitch angle and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases. PMID:26501281

  8. Rocking or Rolling – Perception of Ambiguous Motion after Returning from Space

    PubMed Central

    Clément, Gilles; Wood, Scott J.

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered with 1–2 days. During dynamic linear acceleration (0.15–0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore–aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as it will be the case for human asteroid and Mars missions. PMID:25354042

  9. Rocking or rolling--perception of ambiguous motion after returning from space.

    PubMed

    Clément, Gilles; Wood, Scott J

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered with 1-2 days. During dynamic linear acceleration (0.15-0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore-aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as it will be the case for human asteroid and Mars missions.

  10. Characterizing the chromosomes of the platypus (Ornithorhynchus anatinus).

    PubMed

    McMillan, Daniel; Miethke, Pat; Alsop, Amber E; Rens, Willem; O'Brien, Patricia; Trifonov, Vladimir; Veyrunes, Frederic; Schatzkamer, Kyriena; Kremitzki, Colin L; Graves, Tina; Warren, Wesley; Grützner, Frank; Ferguson-Smith, Malcolm A; Graves, Jennifer A Marshall

    2007-01-01

    Like the unique platypus itself, the platypus genome is extraordinary because of its complex sex chromosome system, and is controversial because of difficulties in identification of small autosomes and sex chromosomes. A 6-fold shotgun sequence of the platypus genome is now available and is being assembled with the help of physical mapping. It is therefore essential to characterize the chromosomes and resolve the ambiguities and inconsistencies in identifying autosomes and sex chromosomes. We have used chromosome paints and DAPI banding to identify and classify pairs of autosomes and sex chromosomes. We have established an agreed nomenclature and identified anchor BAC clones for each chromosome that will ensure unambiguous gene localizations.

  11. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique

    ERIC Educational Resources Information Center

    Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.

    2005-01-01

    A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…

  12. Deconvolution of Energy Spectra in the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Batkov, K. E.; Panov, A. D.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Chang, J.; Christl, M.; Fazley, A. R.; Ganel, O.; Gunasigha, R. M.; et al.

    2005-01-01

    The Advanced Thin Ionization Calorimeter (ATIC) balloon-borne experiment is designed to perform cosmic-ray elemental spectra measurements from below 100 GeV up to tens of TeV for nuclei from hydrogen to iron. The instrument is composed of a silicon matrix detector followed by a carbon target, interleaved with scintillator tracking layers, and a segmented BGO calorimeter composed of 320 individual crystals totalling 18 radiation lengths, used to determine the particle energy. The technique for deconvolution of the energy spectra measured in the thin calorimeter is based on detailed simulations of the response of the ATIC instrument to different cosmic ray nuclei over a wide energy range. The method of deconvolution is described, and the energy spectrum of carbon obtained with this technique is presented.
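    Response-matrix deconvolution of a measured spectrum can be sketched with a multiplicative unfolding iteration (a Richardson-Lucy/d'Agostini-style update). The toy Gaussian response and power-law spectrum below are illustrative assumptions, not ATIC's simulated instrument response:

```python
import numpy as np

# Toy detector response: each true-energy bin j spills into nearby measured
# bins i with Gaussian weights (illustrative only).
nbins = 20
i, j = np.meshgrid(np.arange(nbins), np.arange(nbins), indexing="ij")
R = np.exp(-0.5 * ((i - j) / 1.5) ** 2)
R /= R.sum(axis=0)                                  # unit efficiency per column

true_spec = 1e4 * np.arange(1, nbins + 1) ** -2.7   # steep power-law spectrum
measured = R @ true_spec

# Multiplicative unfolding: u <- u * R^T (measured / (R u))
u = np.full(nbins, measured.mean())
for _ in range(1000):
    u = u * (R.T @ (measured / (R @ u)))
```

    With a column-normalized response matrix the iteration's fixed point satisfies R u = measured, and positivity of the unfolded spectrum is preserved at every step.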

  13. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average-slopes measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments at different turbulence strengths show that our method delivers superior image-restoration results and noise-rejection capability, especially when extracting multidirectional phase derivatives.

  14. SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

    Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
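    One classic stabilization of the spectral division is a water level, which raises small spectral amplitudes of the Green's function to a floor before dividing; this is a sketch of that kind of simple technique, with an invented Green's function and boxcar source-time function:

```python
import numpy as np

def waterlevel_deconv(big, small, level=0.01):
    """Spectral division of a large event by an empirical Green's function,
    stabilized by a water level: spectral power of the divisor is floored at
    `level` times its peak before dividing."""
    B, S = np.fft.rfft(big), np.fft.rfft(small)
    power = np.abs(S) ** 2
    floor = level * power.max()
    return np.fft.irfft(B * np.conj(S) / np.maximum(power, floor), len(big))

# Toy check: the "large event" is the Green's function convolved with a
# boxcar source-time function, so the division should recover the boxcar.
n = 256
t = np.arange(n)
green = np.exp(-t / 10.0) * np.sin(t / 2.0)        # stand-in Green's function
stf = np.zeros(n); stf[:8] = 1.0                   # 8-sample boxcar source
big = np.real(np.fft.ifft(np.fft.fft(green) * np.fft.fft(stf)))
recovered = waterlevel_deconv(big, green, level=1e-6)
```

    The water level trades fidelity at spectral notches of the Green's function for stability against noise amplification; with real data `level` is chosen much larger than here.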

  15. Deconvolution of time series in the laboratory

    NASA Astrophysics Data System (ADS)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
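    The first application, undoing a known high-pass filter by division in Fourier space, can be sketched as follows; the one-pole response, cutoff frequency, and regularization constant are assumptions for illustration, not the sound-card characterization of the paper:

```python
import numpy as np

# One-pole RC high-pass response H(f) = i f / (fc + i f); dividing the
# recorded spectrum by H undoes the filter where |H| is not too small.
fs, fc, n = 1000.0, 20.0, 1000
f = np.fft.rfftfreq(n, 1 / fs)
H = 1j * f / (fc + 1j * f)

t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5 * t)                      # 5 Hz tone, below cutoff
y = np.fft.irfft(np.fft.rfft(x) * H, n)            # distorted "recording"

# Regularized deconvolution: conj(H) / (|H|^2 + eps) avoids dividing by
# the near-zero response at DC.
eps = 1e-3
x_rec = np.fft.irfft(np.fft.rfft(y) * np.conj(H) / (np.abs(H) ** 2 + eps), n)
```

    The tone is strongly attenuated by the filter but restored almost to full amplitude by the deconvolution, which is how the dynamic frequency range below the cutoff is recovered.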

  16. A Hyperspectral View of the Crab Nebula

    NASA Astrophysics Data System (ADS)

    Charlebois, M.; Drissen, L.; Bernier, A.-P.; Grandmont, F.; Binette, L.

    2010-05-01

    We have obtained spatially resolved spectra of the Crab nebula in the spectral ranges 450-520 nm and 650-680 nm, encompassing the Hβ, [O III] λ4959, λ5007, Hα, [N II] λ6548, λ6584, and [S II] λ6717, λ6731 emission lines, with the imaging Fourier transform spectrometer SpIOMM at the Observatoire du Mont-Mégantic's 1.6 m telescope. We first compare our data with published observations obtained either from a Fabry-Perot interferometer or from a long-slit spectrograph. Using a spectral deconvolution technique similar to the one developed by Čadež et al., we identify and resolve multiple emission lines separated by large Doppler shifts and contained within the rapidly expanding filamentary structure of the Crab. This allows us to measure important line ratios, such as [N II]/Hα, [S II]/Hα, and [S II] λ6717 /[S II] λ6731 of individual filaments, providing new insight into the SE-NW asymmetry in the Crab. From our analysis of the spatial distribution of the electronic density and of the respective shocked versus photoionized gas components, we deduce that the skin-less NW region must have evolved faster than the rest of the nebula. Assuming a very simple expansion model for the ejecta material, our data provide us with a complete tridimensional view of the Crab.

  17. Component resolved bleaching study in natural calcium fluoride using CW-OSL, LM-OSL and residual TL glow curves after bleaching.

    PubMed

    Angeli, Vasiliki; Polymeris, George S; Sfampa, Ioanna K; Tsirliganis, Nestor C; Kitis, George

    2017-04-01

Natural calcium fluoride has been commonly used as a thermoluminescence (TL) dosimeter due to its high luminescence intensity. This work attempts a correlation between specific TL glow curves after bleaching and the components of linearly modulated optically stimulated luminescence (LM-OSL) as well as continuous-wave OSL (CW-OSL). A component-resolved analysis was applied to both the integrated intensity of the residual TL (RTL) glow curves and all OSL decay curves, using a Computerized Glow-Curve Deconvolution (CGCD) procedure. All CW-OSL and LM-OSL components are correlated to the decay components of the integrated RTL signal, apart from two RTL components that cannot be directly correlated with either an LM-OSL or a CW-OSL component. The single stringent criterion for this correlation is the value of the decay constant λ of each bleaching component. There is only one bleaching component present in all three luminescence entities studied here, indicating that each TL trap yields at least three different bleaching components; different TL traps can exhibit bleaching components with similar values. According to the data of the present work, each RTL bleaching component receives electrons from at least two peaks. The results of the present study strongly suggest that the traps that contribute to TL and OSL are the same. Copyright © 2017 Elsevier Ltd. All rights reserved.
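As an illustration of the glow-curve deconvolution step, the sketch below fits a synthetic two-peak glow curve using the Kitis first-order analytical peak shape; all parameter values are invented, and this is not the CGCD software used by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

k = 8.617e-5  # Boltzmann constant, eV/K

def tl_peak(T, Im, E, Tm):
    """First-order TL glow peak (Kitis analytical approximation)."""
    x = (E / (k * T)) * (T - Tm) / Tm
    return Im * np.exp(1 + x - (T / Tm)**2 * np.exp(x) * (1 - 2 * k * T / E)
                       - 2 * k * Tm / E)

def two_peaks(T, Im1, E1, Tm1, Im2, E2, Tm2):
    return tl_peak(T, Im1, E1, Tm1) + tl_peak(T, Im2, E2, Tm2)

T = np.linspace(350.0, 600.0, 500)              # temperature axis, K
true_p = (1.0, 1.0, 450.0, 0.7, 1.2, 520.0)     # invented peak parameters
glow = two_peaks(T, *true_p)                    # synthetic glow curve

p0 = (0.8, 0.9, 440.0, 0.5, 1.1, 530.0)         # rough initial guesses
popt, _ = curve_fit(two_peaks, T, glow, p0=p0)
print(popt[2], popt[5])                         # recovered peak temperatures
```

With reasonable starting values the fit recovers the individual peak parameters even though the peaks overlap, which is the essence of a component-resolved analysis.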

  18. Cation profiling of passive films on stainless steel formed in sulphuric and acetic acid by deconvolution of angle-resolved X-ray photoelectron spectra

    NASA Astrophysics Data System (ADS)

    Högström, Jonas; Fredriksson, Wendy; Edstrom, Kristina; Björefors, Fredrik; Nyholm, Leif; Olsson, Claes-Olof A.

    2013-11-01

An approach for determining depth gradients of metal-ion concentrations in passive films on stainless steel using angle-resolved X-ray photoelectron spectroscopy (ARXPS) is described. The iterative method, which is based on analyses of the oxidised metal peaks, provides increased precision and hence allows faster ARXPS measurements to be carried out. The method was used to determine the concentration depth profiles for molybdenum, iron and chromium in passive films on 316L/EN 1.4432 stainless steel samples oxidised in 0.5 M H2SO4 and acetic acid diluted with 0.02 M Na2B4O7 · 10H2O and 1 M H2O, respectively. The molybdenum concentration in the film was pinpointed to the oxide/metal interface, and the films also contained an iron-ion-enriched surface layer and a chromium-ion-dominated middle layer. Although films of similar composition and thickness (i.e., about 2 nm) were formed in the two electrolytes, the corrosion currents were found to be three orders of magnitude larger in the acetic acid solution. The differences in the layer composition, found for the two electrolytes as well as for different oxidation conditions, can be explained based on the oxidation potentials of the metals and the dissolution rates of the different metal ions.

  19. Law of the sea, the continental shelf, and marine research

    USGS Publications Warehouse

    Hutchinson, Deborah R.; Rowland, Robert W.

    2007-01-01

    The question of the amount of seabed to which a coastal nation is entitled is addressed in the United Nations Convention on the Law of the Sea (UNCLOS). This treaty, ratified by 153 nations and in force since 1994, specifies national obligations, rights, and jurisdiction in the oceans, and it allows nations a continental shelf out to at least 200 nautical miles or to a maritime boundary. Article 76 (A76) of the convention enables coastal nations to establish their continental shelves beyond 200 nautical miles and therefore to control, among other things, access for scientific research and the use of seabed resources that would otherwise be considered to lie beyond national jurisdiction. To date, seven submissions for extended continental shelves (ECS) have been filed under UNCLOS (Table 1). These submissions have begun to define the ambiguities in A76. How these ambiguities are resolved into final ECS boundaries will probably set important precedents guiding the future delimitation of the ECS by the United States, which has not ratified the convention, and other coastal nations. This report uses examples from the first three submissions—by the Russian Federation, Brazil, and Australia—to identify outstanding issues encountered in applying A76 to ECS delimitation.

  20. Extracting numeric measurements and temporal coordinates from Japanese radiological reports

    NASA Astrophysics Data System (ADS)

    Imai, Takeshi; Onogi, Yuzo

    2004-04-01

Medical records are written mainly in natural language. The focus of this study is narrative radiological reports written in natural Japanese. These reports cannot be used for advanced retrieval, data mining, and so on unless they are stored in a structured format such as DICOM-SR. The goal is to structure narrative reports progressively using natural language processing (NLP). Structure has many different levels; for example, DICOM-SR has three established levels: basic text, enhanced, and comprehensive. At the enhanced level, it is necessary to use numerical measurements and spatial and temporal coordinates. In this study, the wording used in the reports was first standardized, dictionaries were organized, and morphological analysis was performed. Next, numerical measurements and temporal coordinates were extracted, and the objects to which they referred were analyzed. 10,000 CT and MR reports were separated into 82,122 sentences, and 34,269 of the 36,444 numerical descriptions were tagged. Periods, slashes, hyphens, and parentheses are used ambiguously in the description of enumerated lists, dates, image numbers, and anatomical names, as well as at the end of sentences; to resolve this ambiguity, descriptions were processed in a fixed order (date, size, unit, enumerated list, abbreviation), and the tagged reports were then separated into sentences.
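The order-dependent disambiguation step can be illustrated with a toy regex tagger (an English stand-in for the Japanese reports; the tag names and patterns are invented): tagging dates before sizes prevents the slashes in a date from being misread as something else.

```python
import re

# Tagging order matters: dates first, then sizes, mirroring the paper's
# ordered processing (date, size, unit, enumerated list, abbreviation).
DATE = re.compile(r"\b\d{4}/\d{1,2}/\d{1,2}\b")
SIZE = re.compile(r"\b\d+(?:\.\d+)?\s*(?:mm|cm)\b")

def tag(sentence):
    sentence = DATE.sub(lambda m: f"<date>{m.group()}</date>", sentence)
    sentence = SIZE.sub(lambda m: f"<size>{m.group()}</size>", sentence)
    return sentence

print(tag("CT on 2004/04/01 shows a 12.5 mm nodule."))
```

Running the size pattern first would risk tagging fragments of the date; fixing the order resolves the ambiguity deterministically.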

  1. Johnny Depp, Reconsidered: How Category-Relative Processing Fluency Determines the Appeal of Gender Ambiguity

    PubMed Central

    Owen, Helen E.; Halberstadt, Jamin; Carr, Evan W.; Winkielman, Piotr

    2016-01-01

Individuals that combine features of both genders–gender blends–are sometimes appealing and sometimes not. Heretofore, this difference was explained entirely in terms of sexual selection. In contrast, we propose that part of individuals’ preference for gender blends is due to the cognitive effort required to classify them, and that such effort depends on the context in which a blend is judged. In two studies, participants judged the attractiveness of male-female morphs. Participants did so after classifying each face in terms of its gender, which was selectively more effortful for gender blends, or classifying faces on a gender-irrelevant dimension, which was equally effortful for gender blends. In both studies, gender blends were disliked when, and only when, the faces were first classified by gender, despite an overall preference for feminine features in all conditions. Critically, the preferences were mediated by the effort of stimulus classification. The results suggest that the variation in attractiveness of gender-ambiguous faces may derive from context-dependent requirements to determine gender membership. More generally, the results show that the difficulty of resolving social category membership–not just attitudes toward a social category–feeds into perceivers’ overall evaluations toward category members. PMID:26845341

  3. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSFs). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.

  4. Fast Fourier-based deconvolution for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming

    2018-07-01

Fourier-based deconvolution can rapidly sharpen acoustic source identification results, and it has been studied and applied widely for delay and sum (DAS) beamforming with two-dimensional (2D) planar arrays. So far, however, it has not been developed for spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays; this paper is motivated to settle that problem. Firstly, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations in order to determine the effective identification region. For the premise to be satisfied approximately, the opening angle in the elevation dimension of the surface of interest should be small, while no restriction is imposed on the azimuth dimension. Then, two deconvolution formulations are built for SHB, using the zero and the periodic boundary conditions respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and better fits 3D acoustic source identification with solid spherical arrays. Finally, four deconvolution methods based on the periodic boundary condition are formulated, and their performance is assessed both with simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination over SHB. The recovered source strength approximates the exact one multiplied by a coefficient equal to the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance, always approximating the exact one.

  5. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited by contamination from the blooming effect of the contrast-enhanced lumen. We report the application of an image deconvolution technique using a measured system point-spread function, on CT data obtained from a photon-counting CT system, to reduce blooming and to improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images over an energy range of 25-120 keV were reconstructed. CT numbers were measured at multiple locations in the carotid walls and at multiple time points, pre- and post-contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid walls due to contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all time points, enabling detection of the increased VV in the artery wall.

  6. VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)

    NASA Astrophysics Data System (ADS)

    Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.

    2015-05-01

    This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R) The Stokes parameters are projected onto a few spectral eigenvectors and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).

  7. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which allows one to `cool down' the image with respect to the signal while suppressing many of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.

  8. A method to measure the presampling MTF in digital radiography using Wiener deconvolution

    NASA Astrophysics Data System (ADS)

    Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui

    2013-03-01

We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted-edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted-edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute-integrability requirement of the Fourier transform, the original ESFs were mirrored to construct a symmetric pattern of ESFs. Based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could then be acquired from the symmetric pattern of degraded supersampled ESFs, given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and of the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF from slanted-edge images was established using the Wiener deconvolution technique.
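The core Wiener step can be illustrated with a generic 1-D sketch (this is not the authors' exact ESF-mirroring procedure; the blur width, noise level, and Wiener parameter `K` are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
x = np.arange(n) - n // 2
edge = (x >= 0).astype(float)                # sharp edge (ground truth)

sigma = 8.0                                  # assumed blur width, samples
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))        # blur transfer function

blurred = np.real(np.fft.ifft(np.fft.fft(edge) * H))
noisy = blurred + rng.normal(0, 0.002, n)    # degraded edge profile

K = 1e-3                                     # Wiener noise-to-signal parameter
W = np.conj(H) / (np.abs(H)**2 + K)          # Wiener deconvolution filter
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * W))

print(np.abs(noisy - edge).mean(), np.abs(restored - edge).mean())
```

In the paper's setting, the recovered LSF would then give the presampling MTF as the normalized magnitude of its Fourier transform; a larger `K` suppresses noise amplification at the cost of residual blur, which is the trade-off the Wiener parameter controls.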

  9. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas while iterative methods clearly show their efficacy in these examples.
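The Richardson-Lucy iteration mentioned above is compact enough to sketch in 1-D (synthetic object and PSF, circular convolution via FFT). With a normalized PSF it preserves non-negativity and total flux, the two constraints the abstract emphasizes:

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 2.0**2))
psf /= psf.sum()                     # normalized PSF -> flux conservation
P = np.fft.fft(np.fft.ifftshift(psf))

def blur(v, T):
    return np.real(np.fft.ifft(np.fft.fft(v) * T))

obj = np.zeros(n)
obj[100], obj[110] = 5.0, 3.0        # two nearby point sources (invented)
y = blur(obj, P)                     # noiseless "dirty" observation

est = np.full(n, y.sum() / n)        # flat, flux-matched starting image
for _ in range(200):
    ratio = y / np.maximum(blur(est, P), 1e-12)   # data / current model
    est = est * blur(ratio, np.conj(P))           # multiplicative RL update

print(est.sum(), est[100], est[110])
```

Because each update is multiplicative, a non-negative starting image stays non-negative, and with a normalized PSF the total flux of the estimate matches that of the data at every iteration.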

  10. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

The image resolution of remote sensing imaging systems is often limited by degradation resulting from unwanted motion disturbances of the platform during image exposure. Since the platform vibration can take an arbitrary form, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. This paper proposes a deblurring method that combines motion estimation and image deconvolution for both area-array and TDI remote sensing. The image motion is estimated by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Finally, the clear image is recovered from the blurred image of the prime camera by the Richardson-Lucy (RL) iterative deconvolution algorithm with the constructed PSF. For the area-array detector the deconvolution is direct, while for the TDI-CCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing: blurred and distorted images can be properly recovered, not only for visual observation but also with a significant increase in objective evaluation metrics.

  11. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.

  12. Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles

    NASA Astrophysics Data System (ADS)

    Zekavat, Behrooz; Solouki, Touradj

    2012-11-01

We present the details of a data analysis approach for deconvolution of overlapped or unresolved ion mobility (IM) species. This approach takes advantage of the ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.
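The PCA rank check can be illustrated on synthetic data (Gaussian profiles and spectra are invented; this is not the authors' preprocessing platform): the post-IM/CID data matrix of two co-drifting species has two dominant singular values even when their mobility profiles overlap.

```python
import numpy as np

rng = np.random.default_rng(1)
mz = np.arange(200)
specA = np.exp(-(mz - 60.0)**2 / 20.0)      # fragment spectrum of species A
specB = np.exp(-(mz - 140.0)**2 / 20.0)     # fragment spectrum of species B

drift = np.arange(50)
profA = np.exp(-(drift - 20.0)**2 / 30.0)   # overlapping IM arrival profiles
profB = np.exp(-(drift - 26.0)**2 / 30.0)

# Rows: IM arrival-time bins; columns: m/z channels of the CID spectra.
data = np.outer(profA, specA) + np.outer(profB, specB)
data += rng.normal(0, 1e-3, data.shape)     # small measurement noise

sv = np.linalg.svd(data, compute_uv=False)
print(sv[:4] / sv[0])    # two dominant singular values -> two species
```

A clear gap after the second singular value signals a two-component mixture; the self-modeling mixture analysis then rotates these abstract factors into physically meaningful profiles and spectra.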

  13. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
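The bound-based idea can be sketched schematically (the filter shape, cutoff, and spectral bound below are invented): with a regularizing low-pass L(f), the regularization error is bounded by |1 - L(f)| * B(f) for any measurand whose magnitude spectrum stays below B(f).

```python
import numpy as np

f = np.linspace(0, 20e6, 1001)          # frequency axis, Hz (invented)
fc = 7.5e6                              # regularization cutoff (invented)
L = 1 / (1 + (f / fc)**8)               # regularizing low-pass filter L(f)
B = 2.0 * np.exp(-f / 10e6)             # assumed upper bound B(f) on |S(f)|

reg_err_bound = np.abs(1 - L) * B       # worst-case regularization error
u_reg = np.sqrt(np.mean(reg_err_bound**2))   # a simple aggregate (RMS) figure
print(reg_err_bound.max(), u_reg)
```

Turning such a bound into a standard uncertainty consistent with the metrology guidelines is more involved than this sketch; the point is only that the low-dimensional bound B(f) makes the regularization contribution explicit and quantifiable.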

  14. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...

    2016-06-02

Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Integral data can then be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures that proper weight is given to the differential data.
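The GLLS update that SAMINT shares with a Bayesian adjustment can be written in a few lines (toy sensitivities and covariances, not SAMMY/SAMINT internals):

```python
import numpy as np

p = np.array([1.00, 2.00])                 # prior resonance parameters (toy)
Cp = np.diag([0.04, 0.09])                 # prior parameter covariance
S = np.array([[0.5, 0.3],
              [0.1, 0.8]])                 # sensitivities dm/dp (toy)
m_calc = S @ p                             # calculated benchmark responses
m_meas = np.array([1.25, 1.90])            # measured benchmark values (toy)
Cm = np.diag([0.01, 0.01])                 # measurement covariance

G = Cp @ S.T @ np.linalg.inv(S @ Cp @ S.T + Cm)   # GLLS gain matrix
p_new = p + G @ (m_meas - m_calc)                 # adjusted parameters
Cp_new = Cp - G @ S @ Cp                          # reduced posterior covariance

print(p_new, np.diag(Cp_new))
```

The posterior variances shrink and the calculated responses move toward the integral measurements, which is exactly the adjustment behavior described above; the relative sizes of Cp and Cm control how much weight the differential evaluation retains.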

  16. Annually resolved southern hemisphere volcanic history from two Antarctic ice cores

    NASA Astrophysics Data System (ADS)

    Cole-Dai, Jihong; Mosley-Thompson, Ellen; Thompson, Lonnie G.

    1997-07-01

The continuous sulfate analysis of two Antarctic ice cores, one from the Antarctic Peninsula region and one from West Antarctica, provides an annually resolved proxy history of southern hemisphere volcanism since early in the 15th century. The dating is accurate within ±3 years due to the high rate of snow accumulation at both core sites and the small sample sizes used for analysis. The two sulfate records are consistent with each other. A systematic and objective method of separating outstanding sulfate events from the background sulfate flux is proposed and used to identify all volcanic signals. The resulting volcanic chronology covering 1417-1989 A.D. resolves temporal ambiguities about several recently discovered events. A number of previously unknown, moderate eruptions during the late 1600s are uncovered in this chronology. The eruption of Tambora (1815) and the recently discovered eruption of Kuwae (1453) in the tropical South Pacific injected the greatest amount of sulfur dioxide into the southern hemisphere stratosphere during the last half millennium. A technique for comparing the magnitude of volcanic events preserved within different ice cores is developed using normalized sulfate flux. For the same eruptions the variability of the volcanic sulfate flux between the cores is within ±20% of the sulfate flux from the Tambora eruption.
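The separation of outstanding sulfate events from the background flux can be illustrated with a robust-threshold toy (the series below is synthetic; the authors' systematic method is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1417, 1990)
flux = rng.normal(20, 3, years.size)       # background sulfate flux (a.u.)
flux[years == 1815] += 60                  # Tambora-like spike (invented size)
flux[years == 1453] += 55                  # Kuwae-like spike (invented size)

med = np.median(flux)
mad = np.median(np.abs(flux - med))
threshold = med + 6 * 1.4826 * mad         # robust ~6-sigma threshold
events = years[flux > threshold]
print(events)
```

Using the median and MAD keeps the background estimate from being dragged upward by the very spikes one is trying to detect, which is why robust statistics suit this kind of event separation.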

  17. Measuring higher order ambiguity preferences.

    PubMed

    Baillon, Aurélien; Schlesinger, Harris; van de Kuilen, Gijs

    2018-01-01

    We report the results from an experiment designed to measure attitudes towards ambiguity beyond ambiguity aversion. In particular, we implement recently-proposed model-free preference conditions of ambiguity prudence and ambiguity temperance. Ambiguity prudence has been shown to play an important role in precautionary behavior and the mere presence of ambiguity averse agents in markets. We observe that the majority of individuals' decisions are consistent with ambiguity aversion, ambiguity prudence and ambiguity temperance. This finding confirms the prediction of many popular (specifications of) ambiguity models and has important implications for models of prevention behavior.

  18. Chromatographic fingerprinting through chemometric techniques for herbal slimming pills: A way of adulterant identification.

    PubMed

    Shekari, Nafiseh; Vosough, Maryam; Tabar Heidar, Kourosh

    2018-05-01

    In the current study, gas chromatography-mass spectrometry (GC-MS) fingerprinting of herbal slimming pills assisted by chemometric methods is presented. Deconvolution of the two-way chromatographic signals of nine herbal slimming pills into pure chromatographic and spectral patterns was performed. The peak clusters were resolved using multivariate curve resolution-alternating least squares (MCR-ALS) with appropriate constraints. It was revealed that more useful chemical information about the composition of the slimming pills can be obtained by employing a sophisticated GC-MS method coupled with proper chemometric tools, yielding an extended number of identified constituents. The thorough fingerprinting of the complex mixtures proved the presence of some toxic or carcinogenic components, such as toluene, furfural, furfuryl alcohol, styrene, itaconic anhydride, citraconic anhydride, trimethyl phosphate, phenol, pyrocatechol, p-propenylanisole and pyrogallol. In addition, some samples were shown to be adulterated with undeclared ingredients, including stimulants, anorexiants and laxatives such as phenolphthalein, amfepramone, caffeine and sibutramine. Copyright © 2018 Elsevier B.V. All rights reserved.
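    The MCR-ALS decomposition referred to above factors the data matrix D (elution time × m/z) into non-negative concentration profiles C and spectra S, refined by alternating least squares. The sketch below shows only that core loop on simulated data; it is not the authors' implementation, and real analyses add further constraints and principled initial estimates (the peak shapes, noise level, and "purest row" initialization here are invented).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated two-component "data matrix" D (elution times x m/z channels).
t = np.linspace(0.0, 1.0, 60)
C_true = np.column_stack([np.exp(-((t - 0.35) / 0.06) ** 2),
                          np.exp(-((t - 0.55) / 0.06) ** 2)])  # elution profiles
S_true = rng.random((2, 40))                                    # pure spectra
D = C_true @ S_true + 0.001 * rng.standard_normal((60, 40))

# Crude "purest spectra" initial estimate: rows of D near each peak apex.
S = D[[21, 33], :].copy()

# Alternating least squares with non-negativity enforced by clipping.
for _ in range(100):
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)

residual = float(np.linalg.norm(D - C @ S) / np.linalg.norm(D))
```

    With well-separated peaks the relative residual falls to roughly the noise level, although (as in any MCR analysis) rotational ambiguity means the recovered C and S are only defined up to the applied constraints.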

  19. The Role of the Substrate on Photophysical Properties of Highly Ordered 15R-SiC Thin Films

    NASA Astrophysics Data System (ADS)

    Mourya, Satyendra; Jaiswal, Jyoti; Malik, Gaurav; Kumar, Brijesh; Chandra, Ramesh

    2018-06-01

    We report on the structural optimization and photophysical properties of in situ RF-sputtered single crystalline 15R-SiC thin films deposited on various substrates (ZrO2, MgO, SiC, and Si). The role of the substrate in the structural, electronic, and photodynamic behavior of the grown films has been demonstrated using x-ray diffraction, photoluminescence (PL), and time-resolved photoluminescence spectroscopy. The appropriate bonding order and the presence of native oxide on the surface of the grown samples are confirmed by x-ray photoelectron spectroscopy measurements. A deep-blue PL emission has been observed, corresponding to Si-centered defects occurring in the native oxide. Deconvolution of the PL spectra revealed two decay mechanisms corresponding to radiative recombination. The PL intensity and carrier lifetime were found to be substrate-dependent, which may be ascribed to the variation in the trap density of the films grown on different substrates.

  20. Data reduction of isotope-resolved LC-MS spectra.

    PubMed

    Du, Peicheng; Sudha, Rajagopalan; Prystowsky, Michael B; Angeletti, Ruth Hogue

    2007-06-01

    Data reduction of liquid chromatography-mass spectrometry (LC-MS) spectra can be a challenge due to the inherent complexity of biological samples, noise, and non-flat baselines. We present a new algorithm, LCMS-2D, for reliable data reduction of LC-MS proteomics data. LCMS-2D can reliably reduce LC-MS spectra with multiple scans to a list of elution peaks, and subsequently to a list of peptide masses. It is capable of removing noise and of deconvoluting peaks that overlap in m/z, in retention time, or both, by using a novel iterative peak-picking step, a 'rescue' step, and a modified variable selection method. LCMS-2D performs well with three sets of annotated LC-MS spectra, yielding results that are better than those from PepList, msInspect, and the vendor software BioAnalyst. The software LCMS-2D is available under the GNU General Public License from http://www.bioc.aecom.yu.edu/labs/angellab/ as a standalone C program running on Linux.

  1. Super-resolution structured illumination in optically thick specimens without fluorescent tagging

    NASA Astrophysics Data System (ADS)

    Hoffman, Zachary R.; DiMarzio, Charles A.

    2017-11-01

    This research extends the work of Hoffman et al. to provide both sectioning and super-resolution using random patterns within thick specimens. Two methods of processing structured illumination in reflectance have been developed without the need for a priori knowledge of either the optical system or the modulation patterns. We explore the use of two deconvolution algorithms that assume either Gaussian or sparse priors. This paper shows that while both methods accomplish their intended objective, the sparse-priors method provides superior resolution and contrast against all tested targets, providing anywhere from ∼1.6× to ∼2× resolution enhancement. The methods developed here can reasonably be implemented to work without a priori knowledge of the patterns or point spread function. Further, all experiments are run using an incoherent light source, unknown random modulation patterns, and no fluorescent tagging. These additional modifications are challenging, but the generalization of these methods makes them prime candidates for clinical application, providing super-resolved noninvasive sectioning in vivo.

  2. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm improves the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve convergence speed. The unsymmetrical factor is computed automatically to improve adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD with MLIBD were performed; the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The MLIBD algorithm performs outstandingly in restoring the FK5-857 adaptive optics images and the double-star adaptive optics images.

  3. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    NASA Astrophysics Data System (ADS)

    Navarro, Jorge

    The goal of the study presented here is to determine the best available nondestructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements on-site at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted at first in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra could be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predict burnup and cooling time of fuel elements nondestructively. Three detectors, high-purity germanium (HPGe), lanthanum bromide (LaBr3), and high-pressure xenon (HPXe), were used during the study in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was best suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged fuel scanning system that is low-maintenance and easy to control. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed.
    Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source for testing the deconvolution method on experimental data. The path forward is a conceptual design of the fuel scan system using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.
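    One common way to deconvolve detector broadening from a scintillator spectrum is Gold's iterative algorithm, widely used in gamma-ray spectroscopy. The sketch below assumes a Gaussian response matrix and two invented gamma lines; it illustrates the general idea of a spectral deconvolution step, not the study's actual LaBr3 response or method.

```python
import numpy as np

n = 128
channels = np.arange(n)

# Invented "true" spectrum: two gamma lines (positions and intensities illustrative).
truth = np.zeros(n)
truth[40], truth[90] = 1000.0, 400.0

# Assumed Gaussian detector response, column-normalized into a response matrix A.
sigma = 4.0
A = np.exp(-0.5 * ((channels[:, None] - channels[None, :]) / sigma) ** 2)
A /= A.sum(axis=0)

measured = A @ truth  # the broadened spectrum the detector would record

# Gold's multiplicative iteration: stays non-negative and re-sharpens the peaks.
x = np.ones(n)
for _ in range(2000):
    x *= (A.T @ measured) / (A.T @ (A @ x) + 1e-12)
```

    The multiplicative form guarantees a non-negative result, which is why Gold-type updates are popular for count spectra; more iterations trade smoothness for sharper line recovery.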

  4. Identification and deconvolution of carbohydrates with gas chromatography-vacuum ultraviolet spectroscopy.

    PubMed

    Schenk, Jamie; Nagy, Gabe; Pohl, Nicola L B; Leghissa, Allegra; Smuts, Jonathan; Schug, Kevin A

    2017-09-01

    Methodology for qualitative and quantitative determination of carbohydrates with gas chromatography coupled to vacuum ultraviolet detection (GC-VUV) is presented. Saccharides have been intently studied and are commonly analyzed by gas chromatography-mass spectrometry (GC-MS), but not always effectively. This can be attributed to their high degree of structural complexity: α/β anomers from the axial/equatorial positioning of the hydroxyl group at C1, and flexible ring structures that lead to the open-chain, five-membered-ring furanose, and six-membered-ring pyranose configurations. This complexity can result in convoluted chromatograms, ambiguous fragmentation patterns and, ultimately, analyte misidentification. In this study, mono-, di-, and trisaccharides were derivatized by two different methods, permethylation and oximation/pertrimethylsilylation, and analyzed by GC-VUV. These two derivatization methods were then compared for their efficiency, ease of use, and robustness. Permethylation proved to be a useful technique for the analysis of ketopentoses and pharmaceuticals soluble in dimethyl sulfoxide (DMSO), while the oximation/pertrimethylsilylation method prevailed as the more promising derivatization method overall. VUV spectra have been shown to be distinct and to allow efficient differentiation of isomeric species such as ketopentoses and reducing versus non-reducing sugars. In addition to identification, pharmaceutical samples containing several compounds were derivatized and analyzed for their sugar content with the GC-VUV technique to provide data for qualitative analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A localized Richardson-Lucy algorithm for fiber orientation estimation in high angular resolution diffusion imaging.

    PubMed

    Liu, Xiaozheng; Yuan, Zhenming; Guo, Zhongwei; Xu, Dongrong

    2015-05-01

    Diffusion tensor imaging is widely used for studying neural fiber trajectories in white matter and for quantifying changes in tissue using diffusion properties at each voxel in the brain. To better model the nature of crossing fibers within complex architectures, rather than using a simplified tensor model that assumes only a single fiber direction at each image voxel, a model mixing multiple diffusion tensors is used to profile diffusion signals from high angular resolution diffusion imaging (HARDI) data. Based on the HARDI signal and a multiple-tensor model, spherical deconvolution methods have been developed to overcome the limitations of the diffusion tensor model when resolving crossing fibers. The Richardson-Lucy algorithm is a popular spherical deconvolution method used in previous work. However, it is based on a Gaussian distribution, while HARDI data are always very noisy and follow a Rician distribution. This current work aims to present a novel solution to address these issues. By simultaneously considering both the Rician bias and neighbor correlation in HARDI data, the authors propose a localized Richardson-Lucy (LRL) algorithm to estimate fiber orientations for HARDI data. The proposed method can simultaneously reduce noise and correct the Rician bias. Mean angular error (MAE) between the estimated fiber orientation distribution (FOD) field and the reference FOD field was computed to examine whether the proposed LRL algorithm offered any advantage over the conventional RL algorithm at various levels of noise. Normalized mean squared error (NMSE) was also computed to measure the similarity between the true FOD field and the estimated FOD field. For MAE comparisons, the proposed LRL approach obtained the best results in most of the cases at different levels of SNR and b-values.
    For NMSE comparisons, the proposed LRL approach obtained the best results in most of the cases at b-value = 3000 s/mm², which is the recommended scheme for HARDI data acquisition. In addition, the FOD fields estimated by the proposed LRL approach in regions of fiber crossing using real data sets showed fiber structures that agree with common knowledge of these regions. The novel spherical deconvolution method for improved accuracy in investigating crossing fibers can simultaneously reduce noise and correct the Rician bias. With the noise smoothed and the bias corrected, this algorithm is especially suitable for estimating fiber orientations in HARDI data. Experimental results using both synthetic and real imaging data demonstrated the success and effectiveness of the proposed LRL algorithm.
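    At the heart of both the conventional RL method and the proposed LRL variant is a simple multiplicative update. The 1D sketch below shows only the generic Richardson-Lucy iteration on an invented two-peak signal; the Rician-bias correction and neighborhood terms that define LRL are not reproduced.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """Generic multiplicative Richardson-Lucy update (1D, noise-free demo)."""
    estimate = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]  # correlation with the PSF = convolution with its flip
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate  # positivity is preserved automatically by the update

# Invented two-peak "crossing fiber" signal blurred by a Gaussian kernel.
signal = np.zeros(64)
signal[20], signal[43] = 2.0, 1.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)
psf /= psf.sum()
observed = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

    The multiplicative form keeps the estimate non-negative, which is the property that makes RL-type updates natural for FOD estimation, where negative fiber fractions are unphysical.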

  6. Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field.

    PubMed

    Dimmock, Stephen G; Kouwenberg, Roy; Mitchell, Olivia S; Peijnenburg, Kim

    2015-12-01

    We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model's estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences.
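    The α-MaxMin rule referred to above values an ambiguous act as a weighted mix of the worst-case and best-case expected utilities over the set of priors. A minimal sketch, with invented priors and an illustrative α:

```python
def alpha_maxmin(outcomes, priors, alpha):
    """V = alpha * min_p E_p[u] + (1 - alpha) * max_p E_p[u]."""
    expectations = [sum(p * u for p, u in zip(prior, outcomes)) for prior in priors]
    return alpha * min(expectations) + (1 - alpha) * max(expectations)

# Bet paying 100 if an ambiguous event occurs; its probability is believed
# to lie between 0.3 and 0.5 (an invented two-point prior set).
priors = [(0.3, 0.7), (0.5, 0.5)]
v = alpha_maxmin([100.0, 0.0], priors, alpha=0.7)  # 0.7*30 + 0.3*50 = 36.0
```

    A larger α puts more weight on the worst-case prior (ambiguity aversion), while the spread of the prior set captures the perceived degree of ambiguity, matching the two-parameter interpretation estimated in the paper.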

  7. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with bandwidth over-constrained and total variation (TV) regularization to recover a clear image from the AO corrected images. The point spread functions (PSFs) are estimated by bandwidth limited less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  8. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  9. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    Publications during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.

  10. Shape, size and multiplicity of main-belt asteroids I. Keck Adaptive Optics survey.

    PubMed

    Marchis, F; Kaasalainen, M; Hom, E F Y; Berthier, J; Enriquez, J; Hestroffer, D; Le Mignant, D; de Pater, I

    2006-11-01

    This paper presents results from a high spatial resolution survey of 33 main-belt asteroids with diameters >40 km using the Keck II Adaptive Optics (AO) facility. Five of these (45 Eugenia, 87 Sylvia, 107 Camilla, 121 Hermione, 130 Elektra) were confirmed to have satellites. Assuming the same albedo as the primary, these moonlets are relatively small (∼5% of the primary size), suggesting that they are fragments captured after a disruptive collision of a parent body or ejecta captured after an impact. For each asteroid, we have estimated the minimum size of a moonlet that can be positively detected within the Hill sphere of the system by estimating and modeling a 2-σ detection profile: on average over the data set, a moonlet located at 2/100 × R(Hill) (1/4 × R(Hill)) with a diameter larger than 6 km (4 km) would have been unambiguously seen. The apparent size and shape of each asteroid were estimated after deconvolution using a new algorithm called AIDA. The mean diameter for the majority of asteroids is in good agreement with IRAS radiometric measurements, though for asteroids with D < 200 km it is underestimated on average by 6-8%. Most asteroids had a size ratio very close to those determined by lightcurve measurements. One observation of 104 Klymene suggests it has a bifurcated shape. The bi-lobed shape of 121 Hermione described in Marchis et al. [Marchis, F., Hestroffer, D., Descamps, P., Berthier, J., Laver, C., de Pater, I., 2005c. Icarus 178, 450-464] was confirmed after deconvolution. The ratio of contact binaries in our survey, which is limited to asteroids larger than 40 km, is surprisingly high (∼6%), suggesting that a non-single configuration is common in the main belt. Several asteroids have been analyzed with lightcurve inversions. We compared lightcurve inversion models for plane-of-sky predictions with the observed images (9 Metis, 52 Europa, 87 Sylvia, 130 Elektra, 192 Nausikaa, 423 Diotima, and 511 Davida).
The AO images allowed us to determine a unique photometric mirror pole solution, which is normally ambiguous for asteroids moving close to the plane of the ecliptic (e.g., 192 Nausikaa and 52 Europa). The photometric inversion models agree well with the AO images, thus confirming the validity of both the lightcurve inversion method and the AO image reduction technique.

  11. Image restoration and superresolution as probes of small scale far-IR structure in star forming regions

    NASA Technical Reports Server (NTRS)

    Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.

    1986-01-01

    Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to get certain information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution, that is, inference of power at spatial frequencies that exceed D/λ. This potential is made possible by the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.

  12. Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field

    PubMed Central

    Dimmock, Stephen G.; Kouwenberg, Roy; Mitchell, Olivia S.; Peijnenburg, Kim

    2016-01-01

    We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model’s estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences. PMID:26924890

  13. Point-particle effective field theory I: classical renormalization and the inverse-square potential

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Hayman, Peter; Williams, M.; Zalavári, László

    2017-04-01

    Singular potentials (the inverse-square potential, for example) arise in many situations and their quantum treatment leads to well-known ambiguities in choosing boundary conditions for the wave-function at the position of the potential's singularity. These ambiguities are usually resolved by developing a self-adjoint extension of the original problem; a non-unique procedure that leaves undetermined which extension should apply in specific physical systems. We take the guesswork out of this picture by using techniques of effective field theory to derive the required boundary conditions at the origin in terms of the effective point-particle action describing the physics of the source. In this picture ambiguities in boundary conditions boil down to the allowed choices for the source action, but casting them in terms of an action provides a physical criterion for their determination. The resulting extension is self-adjoint if the source action is real (and involves no new degrees of freedom), and not otherwise (as can also happen for reasonable systems). We show how this effective-field picture provides a simple framework for understanding well-known renormalization effects that arise in these systems, including how renormalization-group techniques can resum non-perturbative interactions that often arise, particularly for non-relativistic applications. In particular we argue why the low-energy effective theory tends to produce a universal RG flow of this type and describe how this can lead to the phenomenon of reaction catalysis, in which physical quantities (like scattering cross sections) can sometimes be surprisingly large compared to the underlying scales of the source in question. We comment in passing on the possible relevance of these observations to the phenomenon of the catalysis of baryon-number violation by scattering from magnetic monopoles.

  14. Detection of Y chromosome sequences in a 45,X/46,XXq - patient by Southern blot analysis of PCR-amplified DNA and fluorescent in situ hybridization (FISH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocova, M.; Siegel, S.F.; Wenger, S.L.

    1995-02-13

    In some cases of gonadal dysgenesis, cytogenetic analysis seems to be discordant with the phenotype of the patients. We have applied techniques such as Southern blot analysis and fluorescent in situ hybridization (FISH) to resolve the phenotype/genotype discrepancy in a patient with ambiguous genitalia in whom the peripheral blood karyotype was 45,X. Gonadectomy at age 7 months showed the gonadal tissue to be prepubertal testis on the left side and a streak gonad on the right. The karyotype obtained from the left gonad was 45,X/46,XXq- and that from the right gonad was 45,X. Three different techniques, PCR amplification, FISH, and chromosome painting for X and Y chromosomes, confirmed the presence of Y chromosome sequences. Five different tissues were evaluated. The highest percentage of Y chromosome-positive cells was detected in the left gonad, followed by the peripheral blood lymphocytes, skin fibroblasts, and buccal mucosa. No Y chromosomal material could be identified in the right gonad. Since the Xq- chromosome is present in the left gonad (testis), it is likely that the Xq- contains Y chromosomal material. Sophisticated analysis in this patient showed that she has at least 2 cell lines, one of which contains Y chromosomal material. These techniques elucidated the molecular basis of the genital ambiguity for this patient. When Y chromosome sequences are present in patients with Ullrich-Turner syndrome or gonadal dysgenesis, the risk for gonadal malignancy is significantly increased. Hence, molecular diagnostic methods to ascertain the presence of Y chromosome sequences may expedite the evaluation of patients with ambiguous genitalia. 21 refs., 4 figs., 2 tabs.

  15. Magnetic Field, Density Current, and Lorentz Force Full Vector Maps of the NOAA 10808 Double Sunspot: Evidence of Strong Horizontal Current Flows in the Penumbra

    NASA Astrophysics Data System (ADS)

    Bommier, V.; Landi Degl'Innocenti, E.; Schmieder, B.; Gelly, B.

    2011-04-01

    The context is that of the so-called “fundamental ambiguity” (also azimuth ambiguity, or 180° ambiguity) in magnetic field vector measurements: two field vectors symmetrical with respect to the line-of-sight have the same polarimetric signature, so that they cannot be discriminated. We propose a method to solve this ambiguity by applying the “simulated annealing” algorithm to the minimization of the field divergence, added to the longitudinal current absolute value, the line-of-sight derivative of the magnetic field being inferred from the interpretation of the Zeeman effect observed by spectropolarimetry in two lines formed at different depths. We find that the line pair Fe I λ 6301.5 and Fe I λ 6302.5 is appropriate for this purpose. We treat the example case of the δ-spot of NOAA 10808 observed on 13 September 2005 between 14:25 and 15:25 UT with the THEMIS telescope. Besides the resolved magnetic field map, the electric current density vector map is also obtained. A strong horizontal current density flow is found surrounding each spot inside its penumbra, associated with a non-zero Lorentz force that is centripetal with respect to the spot center (i.e., oriented towards the spot center). The current wrapping direction is found to depend on the spot polarity: clockwise for the positive polarity, counterclockwise for the negative one. This analysis is made possible thanks to the UNNOFIT2 Milne-Eddington inversion code, in which the usual theory is generalized to the case of a line (Fe I λ 6301.5) that is not a normal Zeeman triplet line (as Fe I λ 6302.5 is).
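    The ambiguity-resolution strategy described above can be caricatured in a few lines: treat the 180° flip at each pixel as a binary variable and use simulated annealing to minimize a divergence-based energy. The synthetic field, grid size, cooling schedule, and the omission of the longitudinal-current term make this a rough sketch of the idea, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 8, 8

# Smooth "true" transverse field (constant azimuth), observed only up to a sign.
theta = np.full((ny, nx), 0.3)
bx_true, by_true = np.cos(theta), np.sin(theta)

def energy(sign):
    """Sum of squared discrete horizontal divergence for a given flip pattern."""
    bx, by = sign * bx_true, sign * by_true
    div = np.diff(bx, axis=1)[:-1, :] + np.diff(by, axis=0)[:, :-1]
    return float((div ** 2).sum())

s = rng.choice([-1.0, 1.0], size=(ny, nx))  # random initial 180-degree flips
e = energy(s)
initial_e, best_e, best_s = e, e, s.copy()

T = 1.0
for _ in range(20000):
    i, j = rng.integers(ny), rng.integers(nx)
    s[i, j] *= -1.0                          # propose flipping one pixel's azimuth
    e_new = energy(s)
    if e_new <= e or rng.random() < np.exp((e - e_new) / T):
        e = e_new                            # accept (Metropolis rule)
        if e < best_e:
            best_e, best_s = e, s.copy()
    else:
        s[i, j] *= -1.0                      # reject: undo the flip
    T = max(T * 0.9995, 1e-3)                # geometric cooling
```

    Annealing drives the flip pattern toward a divergence-free configuration; note that a global sign flip of the whole map has the same energy, which is exactly the residual ambiguity the physical |div B| criterion cannot remove on its own.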

  16. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    PubMed

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images that were taken away from the Scherzer focus condition, and therefore do not represent the projected structures intuitively, were utilized for the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The restoration of defect structures, together with the recognition of Si and C atoms, from the experimental images is illustrated. The structure maps of an intrinsic stacking fault in the SiC region, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.

    PubMed

    Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C

    2014-09-15

    We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared with that of a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.

  18. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, as well as the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor s that is temperature independent and for s as a function of temperature.
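    The initial rise (IR) method mentioned above exploits the fact that on the low-temperature edge of a glow peak the TL intensity grows as exp(-E/kT) independent of kinetic order, so a fit of ln I against 1/T recovers the trap depth E from the slope. A sketch with invented peak parameters:

```python
import numpy as np

K_B = 8.617e-5                         # Boltzmann constant in eV/K

E_true, s0 = 1.0, 1e12                 # illustrative trap depth (eV) and frequency factor (1/s)
T = np.linspace(300.0, 420.0, 400)     # temperatures on the rising edge only
I = s0 * np.exp(-E_true / (K_B * T))   # initial-rise intensity, I ~ exp(-E/kT)

# ln(I) is linear in 1/T with slope -E/k, whatever the kinetic order.
slope = np.polyfit(1.0 / T, np.log(I), 1)[0]
E_est = -slope * K_B                   # recovered activation energy in eV
```

    In practice the fit must stop well below the peak maximum (often below ~10-15% of peak intensity) so the exp(-E/kT) approximation holds; the paper's extension handles the harder case of a continuous distribution of trap depths.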

  19. Punch stretching process monitoring using acoustic emission signal analysis. II - Application of frequency domain deconvolution

    NASA Technical Reports Server (NTRS)

    Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.

    1987-01-01

    The coloring effect on the acoustic emission (AE) signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency domain deconvolution technique, which involves identifying the instrumentation transfer functions and multiplying the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, a change in AE signal characteristics can be better interpreted as the result of a change in the state of the process alone. The punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through deconvolution, the frequency characteristics of AE signals generated during stretching become more distinctive and can be used more effectively as a tool for process monitoring.
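
    The deconvolution step described above, multiplying the measured spectrum by the inverse of the instrumentation transfer function, can be sketched as follows; the impulse response, signal, and water-level regularization constant are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
source = rng.standard_normal(n)                # stand-in for the true AE signal

h = np.exp(-np.arange(n) / 8.0)                # assumed instrument impulse response
h /= h.sum()
H = np.fft.fft(h)
measured = np.real(np.fft.ifft(np.fft.fft(source) * H))   # instrument "coloring"

# Regularized inverse filter: conj(H) / (|H|^2 + eps^2) keeps the inverse
# bounded at frequencies where |H| is small.
eps = 1e-3 * np.abs(H).max()
H_inv = np.conj(H) / (np.abs(H) ** 2 + eps ** 2)
recovered = np.real(np.fft.ifft(np.fft.fft(measured) * H_inv))

err = np.linalg.norm(recovered - source) / np.linalg.norm(source)
print(f"relative reconstruction error: {err:.2e}")
```

    With real AE data the transfer function would come from a calibration measurement, and the water level would be set from the noise floor rather than a fixed fraction of the spectral peak.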

  20. Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques

    PubMed Central

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.

    2010-01-01

    In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896

  1. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques.

    PubMed

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D

    2010-01-01

    In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.

  2. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
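
    For contrast with the regression-assisted estimator, here is a minimal sketch of the classical deconvoluting kernel density estimator (Fourier inversion with a band-limited kernel), which uses only the assumed error distribution; the sample size, bandwidth, and error scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
sigma_u = 0.5                              # assumed known measurement-error SD
x_true = rng.standard_normal(n)            # true X ~ N(0, 1)
w = x_true + rng.normal(0.0, sigma_u, n)   # observed W = X + U

h = 0.3                                    # bandwidth (illustrative)
t = np.linspace(-1.0 / h, 1.0 / h, 801)
dt = t[1] - t[0]
phi_k = np.clip(1.0 - (t * h) ** 2, 0.0, None) ** 3   # band-limited kernel transform
phi_u = np.exp(-0.5 * (sigma_u * t) ** 2)             # Gaussian error characteristic fn
phi_w = np.exp(1j * np.outer(t, w)).mean(axis=1)      # empirical characteristic fn of W

# Fourier inversion: divide out the error characteristic function, then invert.
xs = np.linspace(-4.0, 4.0, 161)
fhat = np.real(
    (np.exp(-1j * np.outer(xs, t)) * (phi_w * phi_k / phi_u)).sum(axis=1)
) * dt / (2 * np.pi)

mass = fhat.sum() * (xs[1] - xs[0])
print(f"estimated density at 0: {fhat[xs.size // 2]:.3f}, total mass: {mass:.3f}")
```

    Division by phi_u amplifies high-frequency noise, which is why the band-limited kernel (and the paper's auxiliary-covariate information) matters.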

  3. Studies of Tidal and Planetary Wave Variability in the Middle Atmosphere using UARS and Correlative MF Radar Data

    NASA Technical Reports Server (NTRS)

    Fritts, David C.

    1996-01-01

    The goals of this research effort have been to use MF radar and UARS/HRDI wind measurements for correlative studies of large-scale atmospheric dynamics, focusing specifically on the tidal and various planetary wave structures occurring in the middle atmosphere. We believed that the two data sets together would provide the potential for much more comprehensive studies than either by itself, since they jointly would allow the removal of ambiguities in wave structure that are difficult to resolve with either data set alone. The joint data were to be used for studies of wave structure, variability, and the coupling of these motions to mean and higher-frequency motions.

  4. Superconducting transmission line particle detector

    DOEpatents

    Gray, K.E.

    1988-07-28

    A microvertex particle detector for use in a high-energy physics collider, including a plurality of parallel superconducting thin film strips separated from a superconducting ground plane by an insulating layer to form a plurality of superconducting waveguides. The microvertex particle detector indicates the passage of a charged subatomic particle by measuring the voltage pulse across a superconducting waveguide caused by the transition of the superconducting thin film strip from a superconducting to a non-superconducting state in response to the passage of a charged particle. A plurality of superconducting thin film strips in two orthogonal planes, plus the slow electromagnetic wave propagating in a superconducting transmission line, are used to resolve the N² ambiguity of charged particle events. 6 figs.

  5. Superconducting transmission line particle detector

    DOEpatents

    Gray, Kenneth E.

    1989-01-01

    A microvertex particle detector for use in a high-energy physics collider, including a plurality of parallel superconducting thin film strips separated from a superconducting ground plane by an insulating layer to form a plurality of superconducting waveguides. The microvertex particle detector indicates the passage of a charged subatomic particle by measuring the voltage pulse across a superconducting waveguide caused by the transition of the superconducting thin film strip from a superconducting to a non-superconducting state in response to the passage of a charged particle. A plurality of superconducting thin film strips in two orthogonal planes, plus the slow electromagnetic wave propagating in a superconducting transmission line, are used to resolve the N² ambiguity of charged particle events.

  6. Single-molecule dilution and multiple displacement amplification for molecular haplotyping.

    PubMed

    Paul, Philip; Apgar, Josh

    2005-04-01

    Separate haploid analysis is frequently required for heterozygous genotyping to resolve phase ambiguity or confirm allelic sequence. We demonstrate a technique of single-molecule dilution followed by multiple strand displacement amplification to haplotype polymorphic alleles. Dilution of DNA to haploid equivalency, or a single molecule, is a simple method for separating di-allelic DNA. Strand displacement amplification is a robust method for non-specific DNA expansion that employs random hexamers and phage polymerase Phi29 for double-stranded DNA displacement and primer extension, resulting in high processivity and exceptional product length. Single-molecule dilution was followed by strand displacement amplification to expand separated alleles to microgram quantities of DNA for more efficient haplotype analysis of heterozygous genes.

  7. Optical stretching as a tool to investigate the mechanical properties of lipid bilayers.

    PubMed

    Solmaz, Mehmet E; Sankhagowit, Shalene; Biswas, Roshni; Mejia, Camilo A; Povinelli, Michelle L; Malmstadt, Noah

    2013-10-07

    Measurements of lipid bilayer bending modulus by various techniques produce widely divergent results. We attempt to resolve some of this ambiguity by measuring bending modulus in a system that can rapidly process large numbers of samples, yielding population statistics. This system is based on optical stretching of giant unilamellar vesicles (GUVs) in a microfluidic dual-beam optical trap (DBOT). The microfluidic DBOT system is used here to measure three populations of GUVs with distinct lipid compositions. We find that gel-phase membranes are significantly stiffer than liquid-phase membranes, consistent with previous reports. We also find that the addition of cholesterol does not alter the bending modulus of membranes composed of a monounsaturated phospholipid.

  8. On hydromagnetic oscillations in a rotating cavity.

    NASA Technical Reports Server (NTRS)

    Gans, R. F.

    1971-01-01

    Time-dependent hydromagnetic phenomena in a rotating spherical cavity are investigated in the framework of an interior boundary-layer expansion. The first type of wave is a modification of the hydrodynamic inertial wave, the second is a pseudo-geostrophic wave and is involved in spinup, and the third is related to the MAC waves of Braginskii (1967). It is shown that the MAC waves must satisfy more than the usual normal boundary conditions, and that reference must be made to the boundary-layer solution to resolve the ambiguity regarding which conditions are to be taken. The boundary-layer structure is investigated in detail to display the interactions between applied field, viscosity, electrical conductivity, frequency and latitude.

  9. Attitude motion of a non-attitude-controlled cylindrical satellite

    NASA Technical Reports Server (NTRS)

    Wilkinson, C. K.

    1988-01-01

    In 1985, two non-attitude-controlled satellites were each placed in a low earth orbit by the Scout Launch Vehicle. The satellites were cylindrical in shape and contained reservoirs of hydrazine fuel. Three-axis magnetometer measurements, telemetered in real time, were used to derive the attitude motion of each satellite. Algorithms are generated to deduce possible orientations (and magnitudes) of each vehicle's angular momentum for each telemetry contact. To resolve ambiguities at each contact, a force model was derived to simulate the significant long-term effects of magnetic, gravity gradient, and aerodynamic torques on the angular momentum of the vehicles. The histories of the orientation and magnitude of the angular momentum are illustrated.

  10. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.
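
    Stripped to one Gaussian component in one dimension with known per-point (heteroscedastic) noise, the core XD idea reduces to subtracting the average noise variance from the observed scatter. This toy sketch illustrates that idea only; it is not the XDGMM API:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
mu_true, var_true = 3.0, 4.0

noise_var = rng.uniform(0.5, 2.0, n)            # per-point noise variances (known)
x = rng.normal(mu_true, np.sqrt(var_true), n)   # noise-free draws from the model
obs = x + rng.normal(0.0, np.sqrt(noise_var))   # each datum gets its own noise

mu_est = obs.mean()
var_est = obs.var() - noise_var.mean()          # "deconvolved" intrinsic variance
print(f"mu = {mu_est:.3f}, intrinsic variance = {var_est:.3f}")
```

    The full XD algorithm generalizes this to multiple components and dimensions via an EM iteration in which each point's covariance is inflated by its own noise covariance.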

  11. Two-Dimensional Signal Processing and Storage and Theory and Applications of Electromagnetic Measurements.

    DTIC Science & Technology

    1983-06-01

    system, provides a convenient, low- noise , fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a...directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift varying deblurring of images...deconvolution algorithm has been studied with promising 5 results [I] for simulated motion blurs. Future work will focus on noise effects and the extension

  12. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar Electrokinetic Chromatography Data for the Quantitation of Trinitrotoluene in Mixtures of Other Nitroaromatic Compounds

    DTIC Science & Technology

    2014-02-24

    Suite 600 Washington, DC 20036 NRL/MR/ 6110 --14-9521 Approved for public release; distribution is unlimited. 1Science & Engineering Apprenticeship...Naval Research Laboratory Washington, DC 20375-5320 NRL/MR/ 6110 --14-9521 Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar... Engineering Apprenticeship Program American Society for Engineering Education Washington, DC Kevin Johnson Navy Technology Center for Safety and

  13. Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico

    NASA Astrophysics Data System (ADS)

    Chavez-Perez, S.; Vargas-Meleza, L.

    2007-05-01

    We test, as postprocessing tools, a combination of migration deconvolution and geometric attributes to attack the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been empirically shown to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition footprint problems. We utilize migration deconvolution as a means to improve the quality and resolution of 3D prestack time migrated results from the Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. The seismic data cover the Agua Fria, Coapechaca, and Tajin fields, and exhibit acquisition footprint problems, migration artifacts, and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m, and the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map out depositional features and help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies, previously indistinguishable in the original migrated seismic images.

  14. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide the development of CTP imaging technology for better quantification accuracy and lower radiation dose.
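
    A common deconvolution sub-operation in CTP is inverting the convolution of the arterial input function (AIF) with the flow-scaled residue function by truncated SVD. This sketch uses invented curves and noiseless data; with noisy data the truncation threshold (the regularization strength discussed above) would be raised substantially:

```python
import numpy as np

n, dt = 60, 1.0
tm = np.arange(n) * dt
aif = (tm + dt) * np.exp(-tm / 4.0)    # gamma-variate-like AIF (illustrative)
r_true = np.exp(-tm / 6.0)             # residue function, R(0) = 1
flow = 0.8                             # flow-scaling of the residue function

# Lower-triangular convolution matrix built from the AIF.
idx = np.arange(n)
A = dt * np.where(idx[:, None] >= idx[None, :],
                  aif[(idx[:, None] - idx[None, :]).clip(min=0)], 0.0)
tissue = flow * (A @ r_true)           # tissue curve = flow * (AIF conv R)

# Truncated-SVD inversion; the 1e-8 threshold suits noiseless data only.
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 1e-8 * s.max(), 1.0 / s, 0.0)
fr = Vt.T @ (s_inv * (U.T @ tissue))   # flow-scaled residue estimate

flow_est = fr.max()                    # CBF-like value: max of flow * R(t)
print(f"recovered flow: {flow_est:.3f}")
```

    Raising the threshold suppresses noise but biases `flow_est` downward, which is exactly the regularization-strength/accuracy trade-off the abstract analyzes.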

  15. Data Dependent Peak Model Based Spectrum Deconvolution for Analysis of High Resolution LC-MS Data

    PubMed Central

    2015-01-01

    A data dependent peak model (DDPM) based spectrum deconvolution method was developed for the analysis of high resolution LC-MS data. To construct the extracted ion chromatograms (XICs), a clustering method, density based spatial clustering of applications with noise (DBSCAN), is applied to all m/z values of an LC-MS data set to group the m/z values into XICs. DBSCAN constructs XICs without the need for a user-defined m/z variation window. After XIC construction, the peaks of molecular ions in each XIC are detected using both the first and second derivative tests, followed by an optimized chromatographic peak model selection method for peak deconvolution. A total of six chromatographic peak models are considered, including Gaussian, log-normal, Poisson, gamma, exponentially modified Gaussian, and a hybrid of exponential and Gaussian models. The abundant nonoverlapping peaks are chosen to find the optimal peak models, which are both data- and retention-time-dependent. Analysis of 18 spiked-in LC-MS data sets demonstrates that the proposed DDPM spectrum deconvolution method outperforms the traditional method. On average, the DDPM approach not only detected 58 more chromatographic peaks from each of the test LC-MS data sets, but also improved retention-time and peak-area accuracy by 3% and 6%, respectively. PMID:24533635
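
    In one dimension (the m/z axis), density-based grouping in the DBSCAN spirit reduces to splitting the sorted values at gaps larger than eps. This simplified stand-in, with synthetic m/z values, illustrates how XICs can form without a user-defined variation window:

```python
import numpy as np

def group_mz(mz_values, eps):
    """Cluster m/z values by splitting the sorted list at gaps larger than eps."""
    order = np.sort(np.asarray(mz_values))
    splits = np.where(np.diff(order) > eps)[0] + 1
    return np.split(order, splits)

# Three ion traces jittered around their theoretical m/z, plus one stray value.
rng = np.random.default_rng(3)
mz = np.concatenate([
    300.1512 + rng.normal(0, 0.0005, 40),
    300.1729 + rng.normal(0, 0.0005, 35),
    512.2964 + rng.normal(0, 0.0005, 50),
    [450.0001],
])
xics = group_mz(mz, eps=0.005)
print([f"{g.mean():.4f} (n={g.size})" for g in xics])
```

    Full DBSCAN additionally uses a `min_samples` density criterion, which would flag the stray value at 450.0001 as noise rather than a one-point cluster.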

  16. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded by even small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image with maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used to correct motion blur when subject motion is known.
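
    A one-dimensional sketch of MLEM deconvolution (the Richardson-Lucy iteration): the estimate is repeatedly multiplied by the back-projected ratio of the measured data to the re-blurred estimate. The phantom and blur kernel here are invented:

```python
import numpy as np

def mlem_deblur(blurred, psf, n_iter=200):
    """Richardson-Lucy / MLEM iteration for a known blur kernel."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(blurred, blurred.mean())   # flat, positive initial estimate
    for _ in range(n_iter):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

truth = np.zeros(64)
truth[20:24] = 10.0                               # small bright structure
truth[40] = 25.0                                  # point-like structure
psf = np.array([1.0, 2.0, 4.0, 2.0, 1.0]) / 10.0  # assumed motion-blur kernel
blurred = np.convolve(truth, psf, mode="same")

restored = mlem_deblur(blurred, psf)
print(f"peak before: {blurred.max():.1f}, after: {restored.max():.1f}")
```

    The multiplicative update keeps the estimate non-negative and preserves total counts, which is why MLEM suits photon-limited PET data.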

  17. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, a preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
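
    The paper's solver is a PDIPM; as a lightweight stand-in, this sketch solves the same kind of l1-regularized deconvolution model with ISTA (proximal gradient descent), using an invented impulse response and two synthetic impacts:

```python
import numpy as np

def ista(H, y, lam, n_iter=1000):
    """Proximal-gradient (ISTA) solver for min 0.5*||H f - y||^2 + lam*||f||_1."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2         # 1/L, L = squared spectral norm
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = f - step * (H.T @ (H @ f - y))         # gradient step on the data term
        f = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return f

n = 100
t = np.arange(n)
h = np.exp(-t / 3.0)                               # assumed impulse response
# Lower-triangular convolution matrix: column k is h delayed by k samples.
H = np.column_stack([np.concatenate([np.zeros(k), h[: n - k]]) for k in range(n)])

f_true = np.zeros(n)
f_true[[15, 60]] = [5.0, 3.0]                      # two sparse synthetic impacts
y = H @ f_true                                     # noiseless measured response

f_est = ista(H, y, lam=0.01)
print("recovered impact samples:", np.flatnonzero(np.abs(f_est) > 0.5).tolist())
```

    The soft-threshold step is what enforces sparsity; a PDIPM reaches the same l1 minimizer but converges in far fewer (more expensive) iterations, which is the efficiency trade-off the paper addresses.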

  18. Partitioning of nitroxides in dispersed systems investigated by ultrafiltration, EPR and NMR spectroscopy.

    PubMed

    Krudopp, Heimke; Sönnichsen, Frank D; Steffen-Heins, Anja

    2015-08-15

    The partitioning behavior of paramagnetic nitroxides in dispersed systems can be determined by deconvolution of electron paramagnetic resonance (EPR) spectra, giving results equivalent to those of the validated methods of ultrafiltration (UF) and pulsed-field gradient nuclear magnetic resonance spectroscopy (PFG-NMR). The partitioning behavior of nitroxides of increasing lipophilicity was investigated in anionic, cationic and nonionic micellar systems and in 10 wt% o/w emulsions. Apart from EPR spectra deconvolution, PFG-NMR was used in micellar solutions as a non-destructive approach, while UF is based on separating a very small volume of the aqueous phase. As a function of their substituents and lipophilicity, the proportions of nitroxides solubilized in the micellar or emulsion interface increased with increasing nitroxide lipophilicity for all emulsifiers used. Comparing the different approaches, EPR deconvolution and UF revealed comparable proportions of nitroxides solubilized in the interfaces; those proportions were higher than found with PFG-NMR. For the PFG-NMR self-diffusion experiments, the reduced nitroxides were used, revealing high dynamics of the hydroxylamines and emulsifiers. Deconvolution of EPR spectra turned out to be the preferred method for measuring the partitioning behavior of paramagnetic molecules, as it enables distinguishing between several populations at their individual solubilization sites. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Extraction of near-surface properties for a lossy layered medium using the propagator matrix

    USGS Publications Warehouse

    Mehta, K.; Snieder, R.; Graizer, V.

    2007-01-01

    Near-surface properties play an important role in advancing earthquake hazard assessment. Other areas where near-surface properties are crucial include civil engineering and the detection and delineation of potable groundwater. From an exploration point of view, near-surface properties are needed for wavefield separation and for correcting for the local near-receiver structure. It has been shown that these properties can be estimated for a lossless homogeneous medium using the propagator matrix. To estimate the near-surface properties, we apply deconvolution to passive borehole recordings of waves excited by an earthquake. Deconvolution of these incoherent waveforms, recorded by sensors at different depths in the borehole, with the recording at the surface results in waves that propagate upwards and downwards along the array. These waves, obtained by deconvolution, can be used to estimate the P- and S-wave velocities near the surface. As opposed to waves obtained by cross-correlation, which represent a filtered version of the sum of the causal and acausal Green's functions between the two receivers, the waves obtained by deconvolution represent the elements of the propagator matrix. Finally, we show analytically the extension of the propagator matrix analysis to a lossy layered medium for the special case of normal incidence. © 2007 The Authors. Journal compilation © 2007 RAS.
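
    The deconvolution step described above can be sketched with synthetic data: spectral division of the borehole recording by the surface recording (with a small water level) collapses the incoherent wavefield to a band-limited spike at the inter-sensor travel time. The geometry, delay, and noise model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt = 1024, 0.004                    # samples and sampling interval (s)
delay = 25                             # inter-sensor delay in samples (0.1 s)

source = rng.standard_normal(n)        # incoherent incident wavefield
surface = source                       # recording at the surface sensor
borehole = np.roll(source, delay)      # same wavefield shifted by the travel time

# Water-level spectral division: B(f) S*(f) / (|S(f)|^2 + eps).
S, B = np.fft.rfft(surface), np.fft.rfft(borehole)
eps = 1e-3 * np.mean(np.abs(S) ** 2)
decon = np.fft.irfft(B * np.conj(S) / (np.abs(S) ** 2 + eps), n)

t_peak = np.argmax(np.abs(decon)) * dt
print(f"recovered inter-sensor travel time: {t_peak:.3f} s")
```

    With the sensor spacing known, the recovered travel time gives the interval velocity directly.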

  20. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  1. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427

  2. Demonstration of Resolving Urban Problems by Applying Smart Technology.

    NASA Astrophysics Data System (ADS)

    Kim, Y.

    2016-12-01

    Recently, efforts around the world to resolve urban problems related to energy, water, greenhouse gases, and disasters by applying smart-technology systems have become more active. The purpose of this study is to evaluate service verification in a demonstration region where smart technology has actually been applied, in order to raise the efficiency of the services and explore solutions for urban problems. This process is required for resolving urban problems in the future and for establishing an 'integration platform' for sustainable development. The demonstration region selected for service verification in this study is Busan, Korea. Last year Busan adopted 16 services in 4 sections and began demonstrations to improve quality of life and resolve urban environmental problems. Busan also officially participated in the Global City Teams Challenge (GCTC) held by the National Institute of Standards and Technology (NIST) in the USA last year, and can be regarded as the representative demonstration region in Korea. The survey results revealed the following practical difficulties in demonstrating smart-technology solutions to urban problems. First, participation was low because citizens were either unaware of the demonstration or did not recognize it as such. Second, demonstrating many services at low cost diluted the effect of each service demonstration. Third, as functions were fused, the responsible management departments, the criteria for applying the technology, and its processes became ambiguous. To increase the efficiency of the demonstration over the remaining period, it is necessary to identify what citizens actually require in order to raise public participation, and to concentrate on the services most worth demonstrating rather than on a large number of demonstrations. Lastly, it is necessary to build an integration platform through cooperation between departments and branches. The data collected from various sources while conducting the service demonstrations will provide meaningful guidance for exploring smart-technology solutions to urban problems in the future.

  3. Curtailing the dark side in non-standard neutrino interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coloma, Pilar; Denton, Peter B.; Gonzalez-Garcia, Maria C.

    In the presence of non-standard neutrino interactions, the neutrino flavor evolution equation is affected by a degeneracy which leads to the so-called LMA-Dark solution. It requires a solar mixing angle in the second octant and implies an ambiguity in the neutrino mass ordering. Non-oscillation experiments are required to break this degeneracy. We perform a combined analysis of data from oscillation experiments with the neutrino scattering experiments CHARM and NuTeV. We find that the degeneracy can be lifted if the non-standard neutrino interactions take place with down quarks, but it remains for up quarks. However, CHARM and NuTeV constraints apply only if the new interactions take place through mediators not much lighter than the electroweak scale. For light mediators, we consider the possibility of resolving the degeneracy using data from future coherent neutrino-nucleus scattering experiments. Here we find that, for an experiment using a stopped-pion neutrino source, the LMA-Dark degeneracy will either be resolved, or the presence of new interactions in the neutrino sector will be established with high significance.

  4. Design considerations and validation of the MSTAR absolute metrology system

    NASA Astrophysics Data System (ADS)

    Peters, Robert D.; Lay, Oliver P.; Dubovitsky, Serge; Burger, Johan; Jeganathan, Muthu

    2004-08-01

    Absolute metrology measures the actual distance between two optical fiducials. A number of methods have been employed, including pulsed time-of-flight, intensity-modulated optical beam, and two-color interferometry. The rms accuracy is currently limited to ~5 microns. Resolving the integer number of wavelengths requires a 1-sigma range accuracy of ~0.1 microns. Closing this gap has a large pay-off: the range (length measurement) accuracy can be increased substantially using the unambiguous optical phase. The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. In this paper, we present recent experiments that use dispersed white light interferometry to independently validate the zero-point of the system. We also describe progress towards reducing the size of optics, and stabilizing the laser wavelength for operation over larger target ranges. MSTAR is a general-purpose tool for conveniently measuring length with much greater accuracy than was previously possible, and has a wide range of possible applications.
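
    The integer-cycle ambiguity the abstract refers to can be illustrated with simple arithmetic: a coarse absolute range, accurate to well under half a wavelength, selects the integer number of cycles N, and the unambiguous optical phase then refines the range. All numbers here are invented:

```python
# Back-of-the-envelope sketch of integer-cycle ambiguity resolution.
wavelength_nm = 1550.0            # assumed metrology laser wavelength
coarse_range_nm = 2.48003e9       # coarse absolute range, ~2.48 m (sub-0.1 um accuracy)
frac_phase_cycles = 0.635         # unambiguous fractional interferometer phase (cycles)

# The coarse measurement selects the integer cycle count N; the optical
# phase then pins the range far below the coarse accuracy.
n_cycles = round(coarse_range_nm / wavelength_nm - frac_phase_cycles)
fine_range_nm = (n_cycles + frac_phase_cycles) * wavelength_nm
print(f"N = {n_cycles}, refined range = {fine_range_nm / 1e9:.9f} m")
```

    This is why the abstract's ~0.1 micron coarse accuracy matters: if the coarse error approached half a wavelength, the wrong N would be chosen and the refined range would be off by a full wavelength.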

  5. Curtailing the dark side in non-standard neutrino interactions

    DOE PAGES

    Coloma, Pilar; Denton, Peter B.; Gonzalez-Garcia, Maria C.; ...

    2017-04-20

    In the presence of non-standard neutrino interactions, the neutrino flavor evolution equation is affected by a degeneracy which leads to the so-called LMA-Dark solution. It requires a solar mixing angle in the second octant and implies an ambiguity in the neutrino mass ordering. Non-oscillation experiments are required to break this degeneracy. We perform a combined analysis of data from oscillation experiments with the neutrino scattering experiments CHARM and NuTeV. We find that the degeneracy can be lifted if the non-standard neutrino interactions take place with down quarks, but it remains for up quarks. However, the CHARM and NuTeV constraints apply only if the new interactions take place through mediators not much lighter than the electroweak scale. For light mediators, we consider the possibility of resolving the degeneracy using data from future coherent neutrino-nucleus scattering experiments. Here we find that, for an experiment using a stopped-pion neutrino source, the LMA-Dark degeneracy will either be resolved, or the presence of new interactions in the neutrino sector will be established with high significance.

  6. The Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Silverburg, Robert

    2009-01-01

    Astronomical studies at infrared wavelengths have dramatically improved our understanding of the universe, and observations with Spitzer, the upcoming Herschel mission, and SOFIA will continue to provide exciting new discoveries. The comparatively low spatial resolution of these missions, however, is insufficient to resolve the physical scales on which mid- to far-infrared emission arises, resulting in source and structure ambiguities that limit our ability to answer key science questions. Interferometry enables high angular resolution at these wavelengths. We have proposed a new high altitude balloon experiment, the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). High altitude operation makes far-infrared (30-300 micron) observations possible, and BETTII's 8-meter baseline provides unprecedented angular resolution (approx. 0.5 arcsec) in this band. BETTII will use a double-Fourier instrument to simultaneously obtain both spatial and spectral information. The spatially resolved spectroscopy provided by BETTII will address key questions about the nature of disks in young cluster stars and active galactic nuclei and the envelopes of evolved stars. BETTII will also lay the groundwork for future space interferometers.
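
    The quoted resolution can be sanity-checked with the usual two-element interferometer rule of thumb, theta ~ lambda / (2B); the factor of 2 is an assumption about the beam combination, and the numbers below are a back-of-envelope check, not BETTII design figures.

```python
import math

# Rough angular resolution of a two-element interferometer, theta ~ lambda/(2B).
# Illustrative only; the exact prefactor depends on the beam combination.
def resolution_arcsec(wavelength_m, baseline_m):
    theta_rad = wavelength_m / (2.0 * baseline_m)
    return math.degrees(theta_rad) * 3600.0

short_end = resolution_arcsec(40e-6, 8.0)   # ~0.5 arcsec near 40 microns
long_end = resolution_arcsec(300e-6, 8.0)   # ~4 arcsec at 300 microns
```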

  7. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII): High Angular Resolution Astronomy at Far-Infrared Wavelengths

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen A.

    2008-01-01

    Astronomical studies at infrared wavelengths have dramatically improved our understanding of the universe, and observations with Spitzer, the upcoming Herschel mission, and SOFIA will continue to provide exciting new discoveries. The comparatively low spatial resolution of these missions, however, is insufficient to resolve the physical scales on which mid- to far-infrared emission arises, resulting in source and structure ambiguities that limit our ability to answer key science questions. Interferometry enables high angular resolution at these wavelengths. We have proposed a new high altitude balloon experiment, the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). High altitude operation makes far-infrared (30-300 micron) observations possible, and BETTII's 8-meter baseline provides unprecedented angular resolution (approx. 0.5 arcsec) in this band. BETTII will use a double-Fourier instrument to simultaneously obtain both spatial and spectral information. The spatially resolved spectroscopy provided by BETTII will address key questions about the nature of disks in young cluster stars and active galactic nuclei and the envelopes of evolved stars. BETTII will also lay the groundwork for future space interferometers.

  8. Treatment decisions under ambiguity.

    PubMed

    Berger, Loïc; Bleichrodt, Han; Eeckhoudt, Louis

    2013-05-01

    Many health risks are ambiguous in the sense that reliable and credible information about these risks is unavailable. In health economics, ambiguity is usually handled through sensitivity analysis, which implicitly assumes that people are neutral towards ambiguity. However, empirical evidence suggests that people are averse to ambiguity and react strongly to it. This paper studies the effects of ambiguity aversion on two classical medical decision problems. If there is ambiguity regarding the diagnosis of a patient, ambiguity aversion increases the decision maker's propensity to opt for treatment. On the other hand, in the case of ambiguity regarding the effects of treatment, ambiguity aversion leads to a reduction in the propensity to choose treatment. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Neural Correlates of Decision-Making Under Ambiguity and Conflict.

    PubMed

    Pushkarskaya, Helen; Smithson, Michael; Joseph, Jane E; Corbly, Christine; Levy, Ifat

    2015-01-01

    HIGHLIGHTS: We use a simple gambles design in an fMRI study to compare two conditions: ambiguity and conflict. Participants were more conflict averse than ambiguity averse. Ambiguity aversion did not correlate with conflict aversion. Activation in the medial prefrontal cortex correlated with ambiguity level and ambiguity aversion. Activation in the ventral striatum correlated with conflict level and conflict aversion. Studies of decision making under uncertainty generally focus on imprecise information about outcome probabilities ("ambiguity"). It is not clear, however, whether conflicting information about outcome probabilities affects decision making in the same manner as ambiguity does. Here we combine functional magnetic resonance imaging (fMRI) and a simple gamble design to study this question. In this design the levels of ambiguity and conflict are parametrically varied, and ambiguity and conflict gambles are matched on expected value. Behaviorally, participants avoided conflict more than ambiguity, and attitudes toward ambiguity and conflict did not correlate across participants. Neurally, regional brain activation was differentially modulated by ambiguity level and aversion to ambiguity and by conflict level and aversion to conflict. Activation in the medial prefrontal cortex was correlated with the level of ambiguity and with ambiguity aversion, whereas activation in the ventral striatum was correlated with the level of conflict and with conflict aversion. These novel results indicate that decision makers process imprecise and conflicting information differently, a finding that has important implications for basic and clinical research.

  10. Image restoration using aberration taken by a Hartmann wavefront sensor on extended object, towards real-time deconvolution

    NASA Astrophysics Data System (ADS)

    Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza

    2015-05-01

    In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The Point Spread Function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A technique is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.
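
    A minimal 1-D sketch of the Lucy-Richardson iteration named above (real restorations would be 2-D, with the PSF simulated from the Hartmann-measured aberration; the toy signal and Gaussian PSF here are invented for illustration):

```python
import numpy as np

# Minimal 1-D Richardson-Lucy deconvolution; toy point sources and a
# Gaussian PSF stand in for the real 2-D image and Hartmann-derived PSF.
def richardson_lucy(observed, psf, n_iter):
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5                     # two point sources
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)  # Gaussian blur kernel
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf, n_iter=200)
# restored concentrates flux back near indices 20 and 40
```

    The multiplicative update keeps the estimate non-negative, which is why Lucy-Richardson is a standard choice for photon-count imagery.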

  11. Novel Image Quality Control Systems(Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies

  12. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Chang-hui; Wei, Kai

    Restricted by observational conditions and hardware, adaptive optics can only partially correct optical images blurred by atmospheric turbulence. A postprocessing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. Frame selection means that we first select which of the degraded (blurred) images participate in the iterative blind deconvolution calculation, which requires no a priori knowledge and only a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system, and the restored image can reach diffraction-limited quality.
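
    The frame-selection step can be illustrated with a simple sharpness ranking; the normalized gradient-energy metric below is a common choice but an assumption on our part, not necessarily the criterion used by the authors.

```python
import numpy as np

# Frame selection by sharpness ranking; the gradient-energy metric is
# illustrative, not necessarily the authors' criterion.
def sharpness(frame):
    gy, gx = np.gradient(frame.astype(float))
    return float((gx ** 2 + gy ** 2).sum()) / float(frame.sum()) ** 2

def select_frames(frames, keep):
    order = sorted(range(len(frames)),
                   key=lambda i: sharpness(frames[i]), reverse=True)
    return sorted(order[:keep])

# Two toy frames with equal flux: a point source and a fully blurred field
sharp = np.zeros((16, 16))
sharp[8, 8] = 1.0
flat = np.full((16, 16), 1.0 / 256)
best = select_frames([flat, sharp], keep=1)   # keeps only the sharp frame
```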

  13. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    Forward looking radar imaging is a practical and challenging problem for the adverse-weather aircraft landing industry. Deconvolution can realize forward looking imaging, but it often amplifies noise in the radar image. In this paper, a deconvolution-based forward looking radar imaging method is presented for adverse-weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. We then convert the imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition method. The key issue of selecting the truncation parameter is addressed using the generalized cross validation approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.

  14. Towards real-time image deconvolution: application to confocal and STED microscopy

    PubMed Central

    Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.

    2013-01-01

    Although deconvolution can improve the quality of any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speed-up factors from about 25 to 690 with respect to the conventional algorithm. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Furthermore, the real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction-resolution recordings. PMID:23982127

  15. Removing the echoes from terahertz pulse reflection system and sample

    NASA Astrophysics Data System (ADS)

    Liu, Haishun; Zhang, Zhenwei; Zhang, Cunlin

    2018-01-01

    Due to echoes from both the terahertz (THz) pulse reflection system and the sample, the primary THz pulse is distorted. The system echoes are of two types: one, preceding the main peak, is probably caused by the ultrafast laser pulse, and the other, following the primary pulse, is caused by the Fabry-Perot (F-P) etalon effect of the detector. We attempt to remove the corresponding echoes using two kinds of deconvolution. A 400 μm Si wafer was selected as the test sample. First, double Gaussian filter (DGF) deconvolution was used to remove the systematic echoes; then another deconvolution technique was employed to eliminate the two obvious echoes of the sample. The results indicated that although the combination of the two deconvolution techniques could not entirely remove the echoes of the sample and system, the echoes were largely reduced.
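
    Frequency-domain deconvolution with a difference-of-Gaussians band-pass filter, one common reading of a "double Gaussian filter" in THz time-domain work, can be sketched as follows; the pulse shapes, filter widths, and the clamp on the reference spectrum are all illustrative assumptions.

```python
import numpy as np

# Band-pass-filtered frequency-domain deconvolution of a sample trace by a
# reference pulse; all shapes and filter widths are illustrative.
dt = 0.05                                    # ps per sample
t = np.arange(0.0, 20.0, dt)
ref = np.exp(-((t - 5.0) / 0.3) ** 2)        # main THz pulse (reference)
sam = ref + 0.3 * np.exp(-((t - 9.0) / 0.3) ** 2)   # pulse plus one echo

f = np.fft.rfftfreq(t.size, dt)              # THz
filt = np.exp(-(f / 8.0) ** 2) - np.exp(-(f / 0.5) ** 2)  # band-pass filter
R = np.fft.rfft(ref)
S = np.fft.rfft(sam)
R_safe = np.where(np.abs(R) > 1e-9, R, 1e-9)  # avoid dividing by ~0
impulse = np.fft.irfft(filt * S / R_safe, t.size)
# impulse shows a main peak at zero delay and the echo near 4 ps (index 80)
```

    The band-pass filter suppresses both the amplified high-frequency noise and the unreliable low-frequency content that naive spectral division would produce.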

  16. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The method was compared with an established spectrophotometric method and found equivalent in accuracy and repeatability (t-test, F-test). It is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
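
    The curve-fitting step can be sketched with the three band centers quoted above; fixing the centers and width turns the fit into a linear least-squares problem for the amplitudes. The common width and the synthetic spectrum are assumptions, and a real fit would also refine centers and widths.

```python
import numpy as np

# Band deconvolution by curve fitting with fixed centers: the three bands
# (1745, 1715, 1600 cm^-1, as in the abstract) are modeled as Gaussians of
# an assumed common width; amplitudes follow from linear least squares.
wavenumber = np.linspace(1500.0, 1850.0, 700)
centers, width = [1745.0, 1715.0, 1600.0], 18.0

def band(c):
    return np.exp(-0.5 * ((wavenumber - c) / width) ** 2)

true_amps = np.array([0.8, 0.5, 1.2])
spectrum = sum(a * band(c) for a, c in zip(true_amps, centers))  # synthetic

G = np.column_stack([band(c) for c in centers])      # design matrix
amps, *_ = np.linalg.lstsq(G, spectrum, rcond=None)  # recovered amplitudes
areas = amps * width * np.sqrt(2.0 * np.pi)          # analytic band areas
# amps recovers [0.8, 0.5, 1.2]; summing areas mirrors the paper's metric
```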

  17. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Navarro, Jorge

    2013-12-01

    The goal of the study presented here is to determine the best available non-destructive technique for collecting validation data and for determining burnup and cooling time of fuel elements onsite at the Advanced Test Reactor (ATR) canal. The study assesses the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of such a system. It first determined whether, and with which equipment, useful spectra could be collected from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra could be obtained at the ATR canal, the next step was to determine which detector and configuration were better suited to predicting burnup and cooling time of fuel elements non-destructively. Three detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), were used in two system configurations, above and below the water pool. The data collected were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage was to determine which of the three detectors was best suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed: the in-situ nature of the measurements required a rugged, low-maintenance, easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, the LaBr3 detector was found to be the best alternative for canal in-situ measurements; however, to enhance the quality of the spectra collected with this scintillator, a deconvolution method was developed.
Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data, and the calibration results showed that burnup prediction improves with deconvolution. The final stage of the development was an irradiation experiment to create a surrogate fuel source for testing the deconvolution method with experimental data. The path forward is a conceptual design of the fuel scan system using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.

  18. Investigation of the flow structure in thin polymer films using 3D µPTV enhanced by GPU

    NASA Astrophysics Data System (ADS)

    Cavadini, Philipp; Weinhold, Hannes; Tönsmann, Max; Chilingaryan, Suren; Kopmann, Andreas; Lewkowicz, Alexander; Miao, Chuan; Scharfer, Philip; Schabel, Wilhelm

    2018-04-01

    To understand the effects of inhomogeneous drying on the quality of polymer coatings, an experimental setup to resolve the occurring flow field throughout the drying film has been developed. Deconvolution microscopy is used to analyze the flow field in 3D and time. Since the dimension of the spatial component in the line-of-sight direction is limited compared to the lateral components, a multi-focal approach is used. Here, the beam of light is equally distributed onto up to five cameras using cubic beam splitters. Adding a meniscus lens between each pair of camera and beam splitter and setting different distances between each camera and its meniscus lens creates multi-focality and allows one to increase the depth of the observed volume. Resolving the spatial component in the line-of-sight direction is based on analyzing the point spread function. The analysis of the PSF is computationally expensive and introduces high complexity compared to traditional particle image velocimetry approaches. A new algorithm tailored to the parallel computing architecture of recent graphics processing units has been developed. The algorithm is able to process typical images in less than a second and has further potential to realize online analysis in the future. As a proof of principle, the flow fields occurring in thin polymer solutions drying at ambient conditions and at boundary conditions that force inhomogeneous drying are presented.

  19. Femtosecond transient absorption spectroscopy of silanized silicon quantum dots

    NASA Astrophysics Data System (ADS)

    Kuntermann, Volker; Cimpean, Carla; Brehm, Georg; Sauer, Guido; Kryschi, Carola; Wiggers, Hartmut

    2008-03-01

    Excitonic properties of colloidal silicon quantum dots (Si qdots) with mean sizes of 4 nm were examined using stationary and time-resolved optical spectroscopy. Chemically stable silicon oxide shells were prepared by controlled surface oxidation and silanization of HF-etched Si qdots. The ultrafast relaxation dynamics of photogenerated excitons in Si qdot colloids were studied on the picosecond time scale from 0.3 ps to 2.3 ns using femtosecond-resolved transient absorption spectroscopy. The time evolution of the transient absorption spectra of the Si qdots excited with a 150 fs pump pulse at 390 nm was observed to consist of decays of various absorption transitions of photoexcited electrons in the conduction band which overlap with both the photoluminescence and the photobleaching of the valence band population density. Gaussian deconvolution of the spectroscopic data allowed for disentangling various carrier relaxation processes involving electron-phonon and phonon-phonon scatterings or arising from surface-state trapping. The initial energy and momentum relaxation of hot carriers was observed to take place via scattering by optical phonons within 0.6 ps. Exciton capturing by surface states forming shallow traps in the amorphous SiOx shell was found to occur with a time constant of 4 ps, whereas deeper traps presumably localized in the Si-SiOx interface gave rise to exciton trapping processes with time constants of 110 and 180 ps. Electron transfer from initially populated, higher-lying surface states to the conduction band of Si qdots (>2 nm) was observed to take place within 400 or 700 fs.
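
    The disentangling of overlapping decay channels can be illustrated as a linear fit over a fixed basis of exponentials with the reported time constants; the synthetic trace and fixed-basis approach below are a sketch, not the authors' Gaussian-deconvolution procedure.

```python
import numpy as np

# Linear unmixing of overlapping decay channels over a fixed exponential
# basis with the reported time constants (0.6, 4, 110, 180 ps); the trace
# and amplitudes are synthetic, for illustration only.
t = np.geomspace(0.3, 2300.0, 2000)            # ps, spanning the probed window
taus = np.array([0.6, 4.0, 110.0, 180.0])      # ps
true_amps = np.array([1.0, 0.6, 0.3, 0.2])
trace = (true_amps * np.exp(-t[:, None] / taus)).sum(axis=1)

E = np.exp(-t[:, None] / taus)                 # basis of fixed decays
amps, *_ = np.linalg.lstsq(E, trace, rcond=None)
# amps recovers the four channel amplitudes from the composite trace
```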

  20. Informatics and Standards for Nanomedicine Technology

    PubMed Central

    Thomas, Dennis G.; Klaessig, Fred; Harper, Stacey L.; Fritts, Martin; Hoover, Mark D.; Gaheen, Sharon; Stokes, Todd H.; Reznik-Zellen, Rebecca; Freund, Elaine T.; Klemm, Juli D.; Paik, David S.; Baker, Nathan A.

    2011-01-01

    There are several issues to be addressed concerning the management and effective use of information (or data) generated from nanotechnology studies in biomedical research and medicine. These data are large in volume, diverse in content, and are beset with gaps and ambiguities in the description and characterization of nanomaterials. In this work, we have reviewed three areas of nanomedicine informatics: information resources; taxonomies, controlled vocabularies, and ontologies; and information standards. Informatics methods and standards in each of these areas are critical for enabling collaboration, data sharing, unambiguous representation and interpretation of data, and semantic (meaningful) search and integration of data, and for ensuring data quality, reliability, and reproducibility. In particular, we have considered four types of information standards in this review: standard characterization protocols, common terminology standards, minimum information standards, and standard data communication (exchange) formats. Currently, due to gaps and ambiguities in the data, it is also difficult to apply computational methods and machine learning techniques to analyze, interpret, and recognize patterns in data that are high dimensional in nature, and to relate variations in nanomaterial properties to variations in their chemical composition, synthesis, characterization protocols, etc. Progress towards resolving the issues of information management in nanomedicine using the informatics methods and standards discussed in this review will be essential to the rapidly growing field of nanomedicine informatics. PMID:21721140
