Sample records for small aliasing errors

  1. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
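
    The discrete-sampling mechanism behind these aliasing errors is easy to reproduce. For a beam filament at radius r0 inside a pipe of radius R, the wall-current distribution contains multipoles (r0/R)^n cos n(theta - theta0), and N equally spaced pickups fold the n = N-1 and n = N+1 multipoles into the dipole (position) estimate. The NumPy sketch below is a toy illustration of that effect, not Ekdahl's simulation code.

      import numpy as np

      def wall_signal(theta, r0, th0, R=1.0):
          # Image-current density on the pipe wall due to a filament at (r0, th0)
          return (R**2 - r0**2) / (R**2 + r0**2 - 2.0*R*r0*np.cos(theta - th0))

      def centroid_estimate(n_det, r0, th0, R=1.0):
          theta = 2.0*np.pi*np.arange(n_det)/n_det    # equally spaced pickups
          s = wall_signal(theta, r0, th0, R)
          return R*np.sum(s*np.cos(theta))/np.sum(s)  # discrete dipole moment

      r0, th0 = 0.5, 0.3                              # filament at half the pipe radius
      for n in (4, 8, 16):
          err = centroid_estimate(n, r0, th0) - r0*np.cos(th0)
          print(f"{n:2d} detectors: x-centroid error = {err:+.2e}")

    The error falls roughly as (r0/R)^(N-1), which is why adding detectors beyond the usual four suppresses the aliasing so effectively.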

  2. Probing the Spatio-Temporal Characteristics of Temporal Aliasing Errors and their Impact on Satellite Gravity Retrievals

    NASA Astrophysics Data System (ADS)

    Wiese, D. N.; McCullough, C. M.

    2017-12-01

    Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.

  3. Aliased tidal errors in TOPEX/POSEIDON sea surface height data

    NASA Technical Reports Server (NTRS)

    Schlax, Michael G.; Chelton, Dudley B.

    1994-01-01

    Alias periods and wavelengths for the M(sub 2), S(sub 2), N(sub 2), K(sub 1), O(sub 1), and P(sub 1) tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K(sub 1) constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M(sub 2) alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M(sub 2), N(sub 2) and O(sub 1) fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S(sub 2), K(sub 1) and P(sub 1) varies smoothly with latitude. S(sub 2) is strongly aliased for latitudes within 50 degrees of the equator, while K(sub 1) and P(sub 1) are only weakly aliased in that range. A weighted least squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual period, westward propagating waves in the North Atlantic is presented.
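
    The alias periods discussed above follow from simple frequency folding: a tide of frequency f sampled every Dt days reappears at |f - round(f*Dt)/Dt|. The sketch below applies that standard formula with the TOPEX/POSEIDON exact-repeat period; the constituent periods are standard values, and the alias wavelengths, the subtler part of the paper, are not treated here.

      import numpy as np

      # Exact-repeat period of the TOPEX/POSEIDON ground track (days): at a fixed
      # point along the track the tide is sampled once per repeat.
      DT = 9.9156

      # Tidal constituent periods in solar hours (standard Doodson values).
      constituents = {"M2": 12.4206012, "S2": 12.0, "N2": 12.65834751,
                      "K1": 23.93447213, "O1": 25.81933871, "P1": 24.06588766}

      for name, hours in constituents.items():
          f = 24.0 / hours                       # frequency in cycles per day
          f_alias = abs(f - round(f * DT) / DT)  # fold into the band below 1/(2*DT)
          print(f"{name}: alias period = {1.0 / f_alias:6.1f} days")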

  4. The Influence of Gantry Geometry on Aliasing and Other Geometry Dependent Errors

    NASA Astrophysics Data System (ADS)

    Joseph, Peter M.

    1980-06-01

    At least three gantry geometries are widely used in medical CT scanners: (1) rotate-translate, (2) rotating detectors, (3) stationary detectors. There are significant geometrical differences between these designs, especially regarding (a) the region of space scanned by any given detector and (b) the sample density of rays which scan the patient. It is imperative to distinguish between "views" and "rays" in analyzing this situation. In particular, views are defined by the x-ray source in type 2 and by the detector in type 3 gantries. It is known that ray dependent errors are generally much more important than view dependent errors. It is shown that spatial resolution is primarily limited by the spacing between rays in any view, while the number of ray samples per beam width determines the extent of aliasing artifacts. Rotating detector gantries are especially susceptible to aliasing effects. It is shown that aliasing effects can distort the point spread function in a way that is highly dependent on the position of the point in the scanned field. Such effects can cause anomalies in the MTF functions as derived from points in machines with significant aliasing problems.

  5. Treatment of ocean tide aliasing in the context of a next generation gravity field mission

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Pail, Roland

    2018-07-01

    Current temporal gravity field solutions from the Gravity Recovery and Climate Experiment (GRACE) suffer from temporal aliasing errors due to undersampling of the signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean) and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high-resolution temporal gravity fields from future gravity missions such as GRACE Follow-On and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parametrize ocean tide parameters of the eight main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from 1 to 3 yr leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for an NGGM Bender-type formation.

  6. Treatment of ocean tide aliasing in the context of a next generation gravity field mission

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Pail, Roland

    2018-04-01

    Current temporal gravity field solutions from GRACE suffer from temporal aliasing errors due to under-sampling of the signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean), and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high resolution temporal gravity fields from future gravity missions such as GRACE Follow-on and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parameterize ocean tide parameters of the 8 main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from one to three years leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 per cent and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for an NGGM Bender-type formation.

  7. Treatment of temporal aliasing effects in the context of next generation satellite gravimetry missions

    NASA Astrophysics Data System (ADS)

    Daras, Ilias; Pail, Roland

    2017-09-01

    Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.

  8. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
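
    The mechanism of polynomial aliasing and its cure by over-integration can be reduced to one dimension: evaluating the nonlinear flux u**2 of a degree-p polynomial with only p+1 Gauss points contaminates the retained modes, while 2p+1 points integrate the L2 projection exactly. The following NumPy sketch illustrates the quadrature effect only; it is not the FR/DG machinery of the paper.

      import numpy as np
      from numpy.polynomial import legendre as L

      p = 4                                          # polynomial degree of the solution
      u_modal = np.zeros(p + 1); u_modal[-1] = 1.0   # u = P_p(x), a worst case

      def project(f, n_quad, deg):
          """L2-project f onto Legendre modes up to `deg` using n_quad Gauss points."""
          x, w = L.leggauss(n_quad)
          coef = []
          for k in range(deg + 1):
              pk = L.Legendre.basis(k)
              coef.append(np.sum(w * f(x) * pk(x)) / (2.0 / (2*k + 1)))
          return np.array(coef)

      u = L.Legendre(u_modal)
      flux = lambda x: u(x)**2                 # nonlinear flux, exact degree 2p

      aliased = project(flux, p + 1, p)        # collocation-strength quadrature
      exact = project(flux, 2*p + 1, p)        # over-integration: exact for degree 2p
      print("aliasing error per mode:", aliased - exact)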

  9. A simulation for gravity fine structure recovery from low-low GRAVSAT SST data

    NASA Technical Reports Server (NTRS)

    Estes, R. H.; Lancaster, E. R.

    1976-01-01

    Covariance error analysis techniques were applied to investigate estimation strategies for the low-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. A 5 degree by 5 degree surface density block representation of the high order geopotential was utilized with the drag-free low-low GRAVSAT configuration in a circular polar orbit at 250 km altitude. Recovery of local sets of density blocks from long data arcs was found not to be feasible due to strong aliasing effects. The error analysis for the recovery of local sets of density blocks using independent short data arcs demonstrated that the estimation strategy of simultaneously estimating a local set of blocks covered by data and two "buffer layers" of blocks not covered by data greatly reduced aliasing errors.

  10. GRAVSAT/GEOPAUSE covariance analysis including geopotential aliasing

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1975-01-01

    A conventional covariance analysis for the GRAVSAT/GEOPAUSE mission is described in which the uncertainties of approximately 200 parameters, including the geopotential coefficients to degree and order 12, are estimated over three different tracking intervals. The estimated orbital uncertainties for both GRAVSAT and GEOPAUSE reach levels more accurate than presently available. The adjusted measurement bias errors approach the mission goal. Survey errors in the low centimeter range are achieved after ten days of tracking. The mission is clearly shown to be capable of determining geopotential terms to (12, 12) with accuracies one to two orders of magnitude superior to present levels. A unique feature of this report is that the aliasing structure of this (12, 12) field is examined. It is shown that uncertainties in unadjusted terms to (12, 12) still exert a degrading effect upon the adjusted error of an arbitrarily selected term of lower degree and order. Finally, the distribution of the aliasing from the unestimated uncertainty of a particular high degree and order geopotential term upon the errors of all remaining adjusted terms is listed in detail.

  11. A Simple Approach to Fourier Aliasing

    ERIC Educational Resources Information Center

    Foadi, James

    2007-01-01

    In the context of discrete Fourier transforms the idea of aliasing as due to approximation errors in the integral defining Fourier coefficients is introduced and explained. This has the positive pedagogical effect of getting to the heart of sampling and the discrete Fourier transform without having to delve into effective, but otherwise long and…
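
    The identity at the heart of the article, namely that the N-sample approximation to the k-th Fourier coefficient collects every true coefficient of order k + mN, can be checked in a few lines: a harmonic above the Nyquist limit reappears at its folded wavenumber. This is a generic illustration, not the article's own example.

      import numpy as np

      N = 8                                       # samples on [0, 2*pi)
      x = 2*np.pi*np.arange(N)/N
      k_true = 11                                 # harmonic beyond Nyquist (N/2 = 4)
      spectrum = np.fft.rfft(np.cos(k_true * x)) / N

      k_fold = abs((k_true + N//2) % N - N//2)    # fold k_true into [-N/2, N/2]
      print("peak at k =", np.argmax(np.abs(spectrum)), "; folded k =", k_fold)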

  12. Site Distribution and Aliasing Effects in the Inversion for Load Coefficients and Geocenter Motion from GPS Data

    NASA Technical Reports Server (NTRS)

    Wu, Xiaoping; Argus, Donald F.; Heflin, Michael B.; Ivins, Erik R.; Webb, Frank H.

    2002-01-01

    Precise GPS measurements of elastic relative site displacements due to surface mass loading offer important constraints on global surface mass transport. We investigate effects of site distribution and aliasing by higher-degree (n ≥ 2) loading terms on inversion of GPS data for n = 1 load coefficients and geocenter motion. Covariance and simulation analyses are conducted to assess the sensitivity of the inversion to aliasing and mismodeling errors and possible uncertainties in the n = 1 load coefficient determination. We found that the use of the center-of-figure approximation in the inverse formulation could cause 10-15% errors in the inverted load coefficients. The n = 1 load estimates may be contaminated significantly by unknown higher-degree terms, depending on the load scenario and the GPS site distribution. The uncertainty in the n = 1 zonal load estimate is at the level of 80-95% for two load scenarios.

  13. On the formulation of gravitational potential difference between the GRACE satellites based on energy integral in Earth fixed frame

    NASA Astrophysics Data System (ADS)

    Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.

    2015-09-01

    Two methods for computing gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on energy integral; one in the geocentric inertial frame (GIF) and another in the Earth fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle per revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower frequency errors can improve the precision of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude, even though the result without removing these errors is already quite accurate. Furthermore, the relation between data errors and their influences on GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of \bar{C}_{2,0} manifest as 2 CPR errors in GPD, and (2) errors of \bar{C}_{2,0} in GRACE data (the differences between the CSR monthly values of \bar{C}_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large. Our simulations show that, if 2 CPR errors in GPD vary from day to day as much as those corresponding to errors of \bar{C}_{2,0} from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may attain a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; we therefore propose an approach to reduce aliasing errors from 2 CPR and lower frequency errors when computing SCs above degree 2.

  14. Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael

    2017-04-01

    The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities coming along with a global space-borne infrastructure for data transfer, clock synchronization and ranging. Gravity field recovery could be one of the first beneficiary applications of such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite-tracking as a novel method and as a long-term perspective for the determination of the Earth's gravitational field, using it as a synergy of one-way high-low combined with low-low satellite-to-satellite-tracking, in order to generate adequate de-aliasing products. Though first planned as a constellation of geostationary satellites, it turned out that integrating European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would extend the capability of such a mission constellation remarkably. We report on simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite-tracking mission, results show reduced temporal aliasing errors due to a more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction within the inter-satellite links, as well as the potential of an improved gravity recovery with higher spatial and temporal resolution. The major error contributors in temporal gravity retrieval are aliasing errors due to undersampling of high frequency signals (mainly atmosphere, ocean and ocean tides). In this context, we investigate adequate methods to reduce these errors. We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period long-wavelength gravity field signals, indicating that this is a powerful method for non-tidal aliasing reduction.

  15. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling by the satellite orbit induces the aliasing of high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which directly alias into the recovered gravity field. The GRACE satellites fly in a non-repeat orbit, which precludes alias error spectral estimation based on the repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which result in an unevenly sampled time series. In view of the two aspects above, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.

  16. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel can be incorporated into existing QSM routines in a straightforward manner.
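
    The contrast between the two formulations can be sketched directly in k-space. The continuous unit-dipole kernel is D(k) = 1/3 - kz^2/|k|^2; a discrete-operator variant replaces each squared frequency by the eigenvalue of the finite-difference Laplacian, 2(1 - cos(2*pi*n/N))/h^2. Note this substitution is one plausible reading of "discrete operators"; the published kernel may differ in detail.

      import numpy as np

      def dipole_kernels(shape, voxel=(1.0, 1.0, 1.0)):
          """Continuous vs. discrete-operator dipole kernels on an FFT grid."""
          ks = [np.fft.fftfreq(n, d=h) for n, h in zip(shape, voxel)]
          kx, ky, kz = np.meshgrid(*ks, indexing="ij")

          # Finite-difference Laplacian eigenvalues along each axis
          d2 = [2.0*(1.0 - np.cos(2*np.pi*np.fft.fftfreq(n)))/h**2
                for n, h in zip(shape, voxel)]
          dx, dy, dz = np.meshgrid(*d2, indexing="ij")

          with np.errstate(divide="ignore", invalid="ignore"):
              D_cont = 1.0/3.0 - kz**2/(kx**2 + ky**2 + kz**2)
              D_disc = 1.0/3.0 - dz/(dx + dy + dz)
          D_cont[0, 0, 0] = D_disc[0, 0, 0] = 0.0   # define the k = 0 singularity
          return D_cont, D_disc

      Dc, Dd = dipole_kernels((64, 64, 64))
      print("max |continuous - discrete|:", np.abs(Dc - Dd).max())

    The two kernels agree at low frequency and diverge near the grid Nyquist, which is where the continuous formulation is prone to aliasing.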

  17. Improvements to photometry. Part 1: Better estimation of derivatives in extinction and transformation equations

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.

    1988-01-01

    Atmospheric extinction in wideband photometry is examined both analytically and through numerical simulations. If the derivatives that appear in the Stromgren-King theory are estimated carefully, it appears that wideband measurements can be transformed to outside the atmosphere with errors no greater than a millimagnitude. A numerical analysis approach is used to estimate derivatives of both the stellar and atmospheric extinction spectra, avoiding previous assumptions that the extinction follows a power law. However, it is essential to satisfy the requirements of the sampling theorem to keep aliasing errors small. Typically, this means that band separations cannot exceed half of the full width at half-peak response. Further work is needed to examine higher order effects, which may well be significant.

  18. Exploiting the Modified Colombo-Nyquist Rule for Co-estimating Sub-monthly Gravity Field Solutions from a GRACE-like Mission

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Weigelt, M.; Mueller, J.

    2017-12-01

    In order to suppress the impact of aliasing errors on the standard monthly GRACE gravity-field solutions, co-estimating sub-monthly (daily/two-day) low-degree solutions has been suggested as a remedy. The maximum degree of the low-degree solutions is chosen via the Colombo-Nyquist rule of thumb. However, it is now established that the sampling of satellites puts a restriction on the maximum estimable order and not the degree (the modified Colombo-Nyquist rule). Therefore, in this contribution, we co-estimate low-order sub-monthly solutions, and compare and contrast them with the low-degree sub-monthly solutions. We also investigate their efficacies in dealing with aliasing errors.

  19. Comparison of high resolution x-ray detectors with conventional FPDs using experimental MTFs and apodized aperture pixel design for reduced aliasing

    NASA Astrophysics Data System (ADS)

    Shankar, A.; Russ, M.; Vijayan, S.; Bednarek, D. R.; Rudin, S.

    2017-03-01

    Apodized Aperture Pixel (AAP) design, proposed by Ismailova et al., is an alternative to the conventional pixel design. The advantages of AAP processing with a sinc filter, in comparison with other filters, include non-degradation of MTF values and elimination of signal and noise aliasing, resulting in increased performance at higher frequencies approaching the Nyquist frequency. If high resolution small field-of-view (FOV) detectors with small pixels used during critical stages of Endovascular Image Guided Interventions (EIGIs) could also be extended to cover a full field-of-view typical of flat panel detectors (FPDs) and made to have larger effective pixels, then methods must be used to preserve the MTF over the frequency range up to the Nyquist frequency of the FPD while minimizing aliasing. In this work, we convolve the experimentally measured MTFs of a Microangiographic Fluoroscope (MAF) detector (the MAF-CCD with 35 μm pixels) and a High Resolution Fluoroscope (HRF) detector (the HRF-CMOS50 with 49.5 μm pixels) with the AAP filter and show the superiority of the results compared with MTFs resulting from moving-average pixel binning and with the MTF of a standard FPD. The effect of using AAP is also shown in the spatial domain, when used to image an infinitely small point object. For detectors in neurovascular interventions, where high resolution is the priority during critical parts of the intervention but a full FOV with larger pixels is needed during less critical parts, the AAP design provides an alternative to simple pixel binning, effectively eliminating signal and noise aliasing while allowing small-FOV high-resolution imaging to be maintained during the critical parts of the EIGI.
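
    The advantage over pixel binning is visible in the filter responses alone: m-pixel moving-average binning multiplies the detector MTF by |sinc(m*pitch*f)|, already down to about 0.64 at the new Nyquist frequency, whereas the ideal sinc (brick-wall) filter underlying the AAP idea passes frequencies up to that Nyquist at full amplitude and nothing above it. The NumPy comparison below uses schematic responses only, not the measured MAF or HRF MTFs.

      import numpy as np

      def binning_mtf(f, m, pitch):
          # m-pixel moving-average binning: |sinc| roll-off, first zero at 1/(m*pitch)
          return np.abs(np.sinc(m * pitch * f))

      def aap_mtf(f, m, pitch):
          # Ideal sinc (brick-wall) filter: unity up to the binned Nyquist, zero above
          return (np.abs(f) <= 1.0/(2.0*m*pitch)).astype(float)

      pitch = 0.035                          # 35 um pixel pitch from the record, in mm
      for m in (2, 4):
          f_nyq = 1.0/(2.0*m*pitch)          # Nyquist of the binned (larger) pixel
          print(f"bin {m}: MTF at {f_nyq:5.1f} cy/mm -> "
                f"binning {binning_mtf(f_nyq, m, pitch):.2f}, AAP {aap_mtf(f_nyq, m, pitch):.2f}")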

  20. Constellations of Next Generation Gravity Missions: Simulations regarding optimal orbits and mitigation of aliasing errors

    NASA Astrophysics Data System (ADS)

    Hauk, M.; Pail, R.; Gruber, T.; Purkhauser, A.

    2017-12-01

    The CHAMP and GRACE missions have demonstrated the tremendous potential for observing mass changes in the Earth system from space. In order to fulfil future user needs, monitoring of mass distribution and mass transport with higher spatial and temporal resolution is required. This can be achieved by a Bender-type Next Generation Gravity Mission (NGGM) consisting of a constellation of satellite pairs flying in (near-)polar and inclined orbits, respectively. For these satellite pairs the observation concept of the GRACE Follow-on mission, with a laser-based low-low satellite-to-satellite tracking (ll-SST) system, more precise accelerometers and state-of-the-art star trackers, is adopted. By choosing optimal orbit constellations for these satellite pairs, high frequency mass variations will be observable and temporal aliasing errors from under-sampling will no longer be the limiting factor. As part of the European Space Agency (ESA) study "ADDCON" (ADDitional CONstellation and Scientific Analysis Studies of the Next Generation Gravity Mission), a variety of mission design parameters for such constellations are investigated by full numerical simulations. These simulations aim at investigating the impact of several orbit design choices and at the mitigation of aliasing errors in the gravity field retrieval by co-parametrization for various constellations of Bender-type NGGMs. Choices for orbit design parameters such as altitude profiles during mission lifetime, length of retrieval period, value of sub-cycles and choice of prograde versus retrograde orbits are investigated as well. Results of these simulations are presented and optimal constellations for NGGMs are identified. Finally, a short outlook towards new geophysical applications, such as a near real time service for hydrology, is given.

  1. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.

  2. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  3. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
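
    The circular-statistics idea in the two records above can be caricatured in a few lines: mapping velocities to phase angles makes a local average insensitive to aliasing, and that average serves as a reference against which dual-PRF unfolding errors (jumps of twice either Nyquist velocity) are detected and replaced. The sketch below is a loose reduction of the concept with an invented nearest-candidate rule, not the published algorithm.

      import numpy as np

      def circular_reference(v, v_ny):
          """Reference velocity field from the circular mean of 3x3 neighborhoods."""
          z = np.exp(1j * np.pi * v / v_ny)          # velocity -> unit phasor
          zp = np.pad(z, 1, mode="edge")
          acc = sum(zp[i:i + v.shape[0], j:j + v.shape[1]]
                    for i in range(3) for j in range(3))
          return v_ny * np.angle(acc) / np.pi

      def correct_dual_prf(v, v_ny_low, v_ny_high, v_ny_ext):
          """Replace each gate by the admissible unfolded value nearest the reference."""
          ref = circular_reference(v, v_ny_ext)
          out = v.copy()
          for two_vny in (2.0*v_ny_low, 2.0*v_ny_high):  # candidate error magnitudes
              for n in (-2, -1, 1, 2):
                  cand = v + n*two_vny
                  out = np.where(np.abs(cand - ref) < np.abs(out - ref), cand, out)
          return out

      v = np.full((5, 5), 20.0)
      v[2, 2] -= 2*16.0                   # one dual-PRF outlier (low Nyquist 16 m/s)
      print(correct_dual_prf(v, 16.0, 20.0, 80.0)[2, 2])   # -> 20.0 restored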

  4. Mapping GRACE Accelerometer Error

    NASA Astrophysics Data System (ADS)

    Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.

    2017-12-01

    After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.

  5. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at small separations (~1-5 λ/D) for a 0 magnitude star, and reaching close to one order of magnitude for a 12 magnitude star.

  6. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc) with A. Majda, based on earlier theoretical work. ... Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that ... resolving subgridscale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online ...

  7. De-aliasing for signal restoration in Propeller MR imaging.

    PubMed

    Chiu, Su-Chin; Chang, Hing-Chiu; Chu, Mei-Lan; Wu, Ming-Long; Chung, Hsiao-Wen; Lin, Yi-Ru

    2017-02-01

    Objects falling outside of the true elliptical field-of-view (FOV) in Propeller imaging show unique aliasing artifacts. This study proposes a de-aliasing approach to restore the signal intensities in Propeller images without extra data acquisition. Computer simulation was performed on the Shepp-Logan head phantom deliberately placed obliquely to examine the signal aliasing. In addition, phantom and human imaging experiments were performed using Propeller imaging with various readouts on a 3.0 Tesla MR scanner. De-aliasing using the proposed method was then performed, with the first low-resolution single-blade image used to identify the aliasing patterns in all the single-blade images, followed by standard Propeller reconstruction. The Propeller images without and with de-aliasing were compared. Computer simulations showed signal loss at the image corners along with aliasing artifacts distributed along directions corresponding to the rotational blades, consistent with clinical observations. The proposed de-aliasing operation successfully restored the correct images in both phantom and human experiments. The de-aliasing operation is an effective adjunct to Propeller MR image reconstruction for retrospective restoration of aliased signals.

  8. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    USGS Publications Warehouse

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors in the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency fsaa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above fsaa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than fsaa. A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate, often used in the computation of short-period PSA. We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using linear time interpolation for the resampling.
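
    The sinc-function interpolation recommended above is Whittaker-Shannon reconstruction of the band-limited record. A direct implementation is quadratic in the record length but transparent; FFT zero-padding is the usual fast equivalent. The sample values below are hypothetical.

      import numpy as np

      def sinc_resample(x, factor):
          """Whittaker-Shannon interpolation onto a grid `factor` times finer."""
          n = np.arange(len(x))
          t = np.arange(factor*len(x)) / factor   # new sample times in old units
          return np.sum(x[None, :] * np.sinc(t[:, None] - n[None, :]), axis=1)

      dt, f0 = 0.01, 35.0                   # 100 samples/s record, 35 Hz test tone
      acc = np.sin(2*np.pi*f0*np.arange(200)*dt)
      acc_up = sinc_resample(acc, 5)        # resampled to 500 samples/s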

  9. Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.

    PubMed

    Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi

    2017-05-01

    When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analyses, we carry out a spatial domain analysis to reveal whether angular aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing versus non-aliasing regions and angular aliasing removal. Experiments on both synthetic scenes and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.

  10. DNS load balancing in the CERN cloud

    NASA Astrophysics Data System (ADS)

    Reguero Naredo, Ignacio; Lobato Pardavila, Lorena

    2017-10-01

    Load Balancing is one of the technologies enabling deployment of large-scale applications on cloud resources. A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications that accept DNS timing dynamics and do not require persistence. It currently serves over 450 load-balanced aliases with two small VMs acting as master and slave. The aliases are mapped to DNS subdomains. These subdomains are managed with DDNS according to a load metric, which is collected from the alias member nodes with SNMP. During the last years, several improvements were brought to the software, for instance: support for IPv6, parallelization of the status requests, implementing the client in Python to allow for multiple aliases with differentiated states on the same machine, and support for application state. The configuration of the Load Balancer is currently managed by a Puppet type. It discovers the alias member nodes and gets the alias definitions from the Ermis REST service. The Aiermis self-service GUI for the management of the LB aliases has been produced; it is based on the Ermis service above, which implements a form of Load Balancing as a Service (LBaaS). The Ermis REST API has authorisation based on Foreman hostgroups. The CERN DNS LBD is open-source software under the Apache 2 license.
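
    The balancing loop such a daemon runs is conceptually small: collect a load metric from each alias member (with SNMP in this case), select the best members, and publish them under the alias subdomain with a dynamic DNS update. The sketch below is a hypothetical reduction of that loop; the function names and metric convention are invented for illustration and do not come from the CERN LBD code.

      from typing import Mapping, Sequence

      def choose_members(load: Mapping[str, float], n_best: int = 2) -> Sequence[str]:
          """Pick the n_best least-loaded members; a negative metric means unavailable."""
          healthy = {host: m for host, m in load.items() if m >= 0}
          return sorted(healthy, key=healthy.get)[:n_best]

      def ddns_update(alias: str, hosts: Sequence[str]) -> None:
          """Stand-in for the dynamic DNS update (e.g. an nsupdate transaction)."""
          print(f"update {alias} -> {', '.join(hosts)}")

      # One balancing cycle for a hypothetical alias, with SNMP-collected metrics
      metrics = {"node1": 12.0, "node2": 3.5, "node3": -1.0}
      ddns_update("myservice.example.ch", choose_members(metrics))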

  11. Subdaily alias and draconitic errors in the IGS orbits

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Ray, J.

    2011-12-01

    Harmonic signals with a fundamental period near the GPS draconitic year (351.2 d) and overtones up to the 8th multiple have been observed in the power spectra of nearly all products of the International GNSS Service (IGS), including station position time series [Ray et al., 2008; Collilieux et al., 2007; Santamaría-Gómez et al., 2011], apparent geocenter motions [Hugentobler et al., 2008], and orbit jumps between successive days and midnight discontinuities in Earth orientation parameter (EOP) rates [Ray and Griffiths, 2009]. Ray et al. [2008] suggested two mechanisms for the harmonics: mismodeling of orbit dynamics and aliasing of near-sidereal local station multipath effects. King and Watson [2010] have studied the propagation of local multipath errors into draconitic position variations, but orbit-related processes have been less well examined. Here we elaborate on our earlier analysis of GPS orbit jumps [Griffiths and Ray, 2009; Gendt et al., 2010], where we observed some draconitic features as well as prominent spectral bands near 29, 14, 9, and 7 d periods. Finer structures within the sub-seasonal bands fall close to the expected alias frequencies of subdaily EOP tide lines but do not coincide precisely. While once-per-rev empirical orbit parameters should strongly absorb any subdaily EOP tide errors due to near-resonance of their respective periods, the observed differences require explanation. This has been done by simulating known EOP tidal errors and checking their impact on a long series of daily GPS orbits. Indeed, simulated tidal aliases are found to be very similar to the observed orbital features in the sub-seasonal bands. Moreover and unexpectedly, some low draconitic harmonics were also stimulated, potentially a source for the widespread errors in most IGS products.

  12. Precise automatic differential stellar photometry

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.; Genet, Russell M.; Boyd, Louis J.; Borucki, William J.; Lockwood, G. Wesley

    1991-01-01

    The factors limiting the precision of differential stellar photometry are reviewed. Errors due to variable atmospheric extinction can be reduced to below 0.001 mag at good sites by utilizing the speed of robotic telescopes. Existing photometric systems produce aliasing errors, which are several millimagnitudes in general but may be reduced to about a millimagnitude in special circumstances. Conventional differential photometry neglects several other important effects, which are discussed in detail. If all of these are properly handled, it appears possible to do differential photometry of variable stars with an overall precision of 0.001 mag with ground based robotic telescopes.

  13. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on which interpolation scheme is chosen in Matlab ("nearest," "linear," "cubic," or "spline"). The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
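
    The low-spatial-frequency half of the method lends itself to a compact sketch: fit a few Zernike terms to the valid pixels of the measured map by least squares, then evaluate the fitted polynomials analytically on the output grid, so no interpolation enters at all. The PSD-based reconstruction of the mid and high spatial frequencies is omitted here, and the six-term basis is a minimal stand-in rather than the software's actual basis set.

      import numpy as np

      def zernike_basis(r, t):
          """A few low-order Zernike terms on the unit disk (unnormalized)."""
          return np.stack([np.ones_like(r),                       # piston
                           r*np.cos(t), r*np.sin(t),              # tip, tilt
                           2.0*r**2 - 1.0,                        # defocus
                           r**2*np.cos(2*t), r**2*np.sin(2*t)])   # astigmatism

      def resample_via_zernike(zmap, n_out):
          """Refit a square surface map and evaluate it on an n_out x n_out grid."""
          n = zmap.shape[0]
          g = np.linspace(-1.0, 1.0, n)
          X, Y = np.meshgrid(g, g)
          r, t = np.hypot(X, Y), np.arctan2(Y, X)
          mask = (r <= 1.0) & np.isfinite(zmap)
          A = zernike_basis(r[mask], t[mask]).T
          coef, *_ = np.linalg.lstsq(A, zmap[mask], rcond=None)

          go = np.linspace(-1.0, 1.0, n_out)
          Xo, Yo = np.meshgrid(go, go)
          ro, to = np.hypot(Xo, Yo), np.arctan2(Yo, Xo)
          out = np.tensordot(coef, zernike_basis(ro, to), axes=1)
          out[ro > 1.0] = np.nan            # outside the circular aperture
          return out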

  14. On the use of kinetic energy preserving DG-schemes for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Flad, David; Gassner, Gregor

    2017-12-01

    Recently, element based high order methods such as Discontinuous Galerkin (DG) methods and the closely related flux reconstruction (FR) schemes have become popular for compressible large eddy simulation (LES). Element based high order methods with Riemann solver based interface numerical flux functions offer an interesting dispersion dissipation behavior for multi-scale problems: dispersion errors are very low for a broad range of scales, while dissipation errors are very low for well resolved scales and are very high for scales close to the Nyquist cutoff. In some sense, the inherent numerical dissipation caused by the interface Riemann solver acts as a filter of high frequency solution components. This observation motivates the trend that element based high order methods with Riemann solvers are used without an explicit LES model added. Only the high frequency type inherent dissipation caused by the Riemann solver at the element interfaces is used to account for the missing sub-grid scale dissipation. Due to under-resolution of vorticity-dominated structures typical of LES-type setups, element based high order methods suffer from stability issues caused by aliasing errors of the non-linear flux terms. A very common strategy to fight these aliasing issues (and instabilities) is so-called polynomial de-aliasing, where interpolation is exchanged with projection based on an increased number of quadrature points. In this paper, we start with this common no-model or implicit LES (iLES) DG approach with polynomial de-aliasing and Riemann solver dissipation and review its capabilities and limitations. We find that the strategy gives excellent results, but only when the resolution is such that about 40% of the dissipation is resolved. For more realistic, coarser resolutions used in classical LES, e.g. of industrial applications, the iLES DG strategy becomes quite inaccurate. We show that there is no obvious fix to this strategy, as adding, for instance, a sub-grid-scale model on top does not change much or, in the worst case, decreases the fidelity even more. Finally, the core of this work is a novel LES strategy based on split form DG methods that are kinetic energy preserving. The scheme offers excellent stability with full control over the amount and shape of the added artificial dissipation. This premise is the main idea of the work, and we assess the LES capabilities of the novel split form DG approach when applied to shock-free, moderate Mach number turbulence. We will demonstrate that the novel DG LES strategy offers similar accuracy as the iLES methodology for well resolved cases, but strongly increases fidelity in case of more realistic coarse resolutions.
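
    The stabilizing effect of a kinetic-energy-preserving split form is easiest to see in a periodic Fourier collocation toy problem rather than a full DG code. For Burgers' equation the skew-symmetric split flux (d/dx(u^2) + u du/dx)/3 conserves the discrete energy exactly in space, while the divergence form -(u^2/2)_x typically goes unstable once the solution is under-resolved. The sketch below rests on those standard facts, not on the split-form DG scheme of the paper; expect NumPy overflow warnings and a NaN energy for the divergence run once the shock forms.

      import numpy as np

      N = 64
      x = 2.0*np.pi*np.arange(N)/N
      ik = 1j*np.fft.fftfreq(N, d=1.0/N)            # spectral derivative operator

      def ddx(u):
          return np.real(np.fft.ifft(ik*np.fft.fft(u)))

      def rhs_div(u):                               # divergence form: -(u^2/2)_x
          return -0.5*ddx(u**2)

      def rhs_split(u):                             # skew-symmetric (KEP) split form
          return -(ddx(u**2) + u*ddx(u))/3.0

      def run(rhs, u0, dt=1e-3, steps=3000):        # classic RK4 to t = 3
          u = u0.copy()
          for _ in range(steps):
              k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1)
              k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
              u += dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
          return u

      u0 = np.sin(x)                                # shock forms near t = 1
      for name, rhs in (("divergence", rhs_div), ("split", rhs_split)):
          u = run(rhs, u0)
          print(f"{name:10s} form: energy change = {0.5*np.sum(u**2 - u0**2):+.3e}")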

  15. Error analysis for spectral approximation of the Korteweg-De Vries equation

    NASA Technical Reports Server (NTRS)

    Maday, Y.

    1987-01-01

    The conservation and convergence properties of spectral Fourier methods for the numerical approximation of the Korteweg-de Vries equation are analyzed. It is proved that the (aliased) collocation pseudospectral method enjoys the same convergence properties as the spectral Galerkin method, which is less effective from the computational point of view. This result provides a precise mathematical answer to a question raised by several authors in recent years.

  16. Shearlet transform in aliased ground roll attenuation and its comparison with f-k filtering and curvelet transform

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-06-01

    Ground roll, which is a Rayleigh surface wave that exists in land seismic data, may mask reflections. Sometimes ground roll is spatially aliased. Attenuation of aliased ground roll is of importance in seismic data processing. Different methods have been developed to attenuate ground roll. The shearlet transform is a directional and multidimensional transform that generates subimages of an input image in different directions and scales. Events with different dips are separated in these subimages. In this study, the shearlet transform is used to attenuate the aliased ground roll. To do this, a shot record is divided into several segments, and the appropriate mute zone is defined for all segments. The shearlet transform is applied to each segment. The subimages related to the non-aliased and aliased ground roll are identified by plotting the energy distributions of the subimages and checking them visually. Then, muting filters are used on selected subimages. The inverse shearlet transform is applied to the filtered segment. This procedure is repeated for all segments. Finally, all filtered segments are merged using the Hanning window. This method of aliased ground roll attenuation was tested on a synthetic dataset and a field shot record from the west of Iran. The synthetic shot record included strong aliased ground roll, whereas the field shot record did not. To produce the strong aliased ground roll on the field shot record, the data were resampled in the offset direction from 30 to 60 m. To show the performance of the shearlet transform in attenuating the aliased ground roll, we compared the shearlet transform with the f-k filtering and curvelet transform. We showed that the performance of the shearlet transform in the aliased ground roll attenuation is better than that of the f-k filtering and curvelet transform in both the synthetic and field shot records. However, when the dip and frequency content of the aliased ground roll are the same as those of the reflections, the ability of the shearlet transform to attenuate the aliased ground roll is limited.

  17. 76 FR 21628 - Implementation of Additional Changes From the Annual Review of the Entity List; Removal of Person...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... Engineering Physics.'' The changes included revising the entry to add additional aliases for that entry. The... listing the aliases as separate aliases for the Chinese Academy of Engineering Physics. China (1) Chinese Academy of Engineering Physics, a.k.a., the following nineteen aliases: --Ninth Academy; --Southwest...

  18. Digital Moiré based transient interferometry and its application in optical surface measurement

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Tan, Yifeng; Wang, Shaopu; Hu, Yao

    2017-10-01

    Digital Moiré based transient interferometry (DMTI) is an effective non-contact testing method for optical surfaces. In a DMTI system, only one frame of the real interferogram is captured experimentally for the transient measurement of the surface under test (SUT). When combined with partial compensation interferometry (PCI), DMTI is especially appropriate for the measurement of aspheres with large apertures, large asphericity or different surface parameters. Residual wavefront is allowed in PCI, so the same partial compensator can be applied to the detection of multiple SUTs. Excessive residual wavefront aberration results in spectrum aliasing, which limits the dynamic range of DMTI. In order to solve this problem, a method based on the wavelet transform is proposed to extract phase from a fringe pattern with spectrum aliasing. Simulation results demonstrate the validity of this method. The dynamic range of the digital Moiré technique is effectively expanded, which makes DMTI promising for surface figure error measurement in the intelligent fabrication of aspheric surfaces.

  19. Fluid Motion and the Toroidal Magnetic Field Near the Top of Earth's Liquid Outer Core.

    NASA Astrophysics Data System (ADS)

    Celaya, Michael Augustine

    This work considers two unresolved problems central to the study of Earth's deep interior: (1) What is the surface flow of the complete three-dimensional motion sustaining the geomagnetic field in the fluid outer core? (2) How strong is the toroidal component of that field just beneath the mantle, inside the core? A solution of these problems is necessary to achieve even a basic understanding of magnetic field generation and core-mantle interactions. Progress in solving (1) is made by extending previous attempts to resolve the core surface flow and identifying obstacles which lead to distorted solutions. The extension relaxes the steady-motions constraint, permitting more realistic solutions which should resemble more closely the real Earth flow. A difficulty with the assumption of steady flow is that if the real motion is unsteady, as it is likely to be, then steady models will suffer from aliasing, and aliased solutions can be highly corrupted. The effects of aliasing incurred through model underparametrization are explored. It is found that flow spectral energy must fall rapidly with increasing degree to escape aliasing's distortion. Damping does not appear to remedy the problem, but in fact obscures it by forcing the solution to converge upon a single, but possibly still aliased, estimate. Inversions of a magnetic field model for unsteady motions indicate steady flows are indeed aliased in time. By comparison, unsteady flows appear free of aliasing and show significant temporal variation, changing by about 30% of their magnitude over 20 years. However, it appears that noise in the high-degree secular variation (SV) data used to determine the flow acts as a further impediment to solving (1). Damping is shown to be effective in removing noise, but only once aliasing is no longer a factor and noise is restricted to that part of the SV which makes only a small contribution to the solution. To solve (2), the radial component of Ohm's law is inverted for the toroidal field (B_T) near the top of the core. The flow, obtained as a solution to (1), is treated as a known quantity, as is the poloidal field. Solutions are sought which minimize the difference between observed and predicted poloidal main field at Earth's surface. As in problem (1), aliasing in space and time stands as a potential impediment to good resolution of the toroidal field. Steady degree-10 models of B_T are obtained which display convergence in space and time without damping. Poloidal field noise, as well as sensitivity to the flow model used in the inversions, limits resolution of the toroidal field geometry. Nevertheless, estimates indicate the magnitude of B_T does not exceed 8 x 10^-5 T, or about half that of the poloidal field near the core surface. Such a low value favors weak-field dynamo models but does not necessarily endorse a geostrophic force balance just beneath the mantle, because the radial derivative of B_T may be large enough to violate conditions required by geostrophy.

  20. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  1. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources of degradation of radial imaging quality. For a given fixed number of k-space projections, the data distributions along the radial and angular directions will influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projections alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections, and a significant reduction of aliasing artifacts was observed using the SP sampling trajectory. The two trajectories were also compared at different sampling frequencies: an SP trajectory has the same aliasing character when half the sampling frequency (or half the data) is used for reconstruction. SNR comparisons with different white-noise levels show that the two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce aliasing artifacts without decreasing SNR and also provides a way for undersampled reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
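
    The trajectory modification itself is simple to write down. The sketch below (ours; the sizes and the half-sample shift are assumptions based on the description above) generates conventional and shaking-projection radial k-space coordinates:

        import numpy as np

        def radial_trajectory(n_spokes, n_samples, shake=0.0):
            # Complex k-space coordinates for radial spokes through the
            # center; shake=0.5 offsets every other spoke by half a radial
            # sample spacing, interleaving samples azimuthally.
            angles = np.pi * np.arange(n_spokes) / n_spokes
            radii = np.arange(n_samples) - n_samples / 2.0
            k = np.empty((n_spokes, n_samples), dtype=complex)
            for i, theta in enumerate(angles):
                shift = shake * (i % 2)
                k[i] = (radii + shift) * np.exp(1j * theta)
            return k

        k_conventional = radial_trajectory(64, 128)
        k_shaking = radial_trajectory(64, 128, shake=0.5)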

  2. Separation of parallel encoded complex-valued slices (SPECS) from a single complex-valued aliased coil image.

    PubMed

    Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C

    2016-04-01

    Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequent inter-slice signal leakage from the slice un-aliasing, while maintaining an optimal reduction in scan time and the activation statistics in fMRI studies. When the combined k-space array is inverse Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with acquired aliased images in a shifted-FOV aliasing pattern, together with a bootstrapping approach that incorporates reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to the spatial and temporal resolution. Functional activation is observed in the motor cortex, as the number of aliased slices is increased, in a bilateral finger-tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images with virtually no residual artifacts, while retaining functional activation detection in the separated images. Copyright © 2015 Elsevier Inc. All rights reserved.
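
    The core un-aliasing step is ordinary least squares. In the minimal sketch below (ours, with illustrative sizes), an orthogonal Hadamard pattern encodes how each slice contributes to each aliased acquisition, and the slice values at a voxel are recovered by inverting that mixing:

        import numpy as np
        from scipy.linalg import hadamard

        rng = np.random.default_rng(0)
        n = 4                                  # slices aliased together
        s_true = rng.normal(size=n)            # true voxel values, one per slice
        A = hadamard(n).astype(float)          # orthogonal +/-1 encoding pattern
        y = A @ s_true + 0.01 * rng.normal(size=n)   # aliased measurements

        s_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(np.round(s_hat - s_true, 3))     # near-zero separation error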

  3. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    NASA Astrophysics Data System (ADS)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

    Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effects of quadrature choice (full mass matrix vs. spectral elements), of over-integration to manage aliasing errors, and of explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices, in periodic and non-periodic domains, the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program, Contract DE-NA0002378.

  4. Adaptive attenuation of aliased ground roll using the shearlet transform

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing may cause the ground roll to overlap with reflections in the f-k domain. The shearlet transform is a directional and multidimensional transform that separates events with different dips and generates subimages in different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After defining a filtering zone, an input shot record is divided into segments, with each segment overlapping the adjacent segments. After applying the shearlet transform to each segment, the subimages containing aliased and non-aliased ground roll, and the locations of these events on each subimage, are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After applying the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive process of ground roll attenuation was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using the f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated using the proposed adaptive attenuation procedure. We also applied this method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections in comparison with the stacked section before ground roll attenuation. The proposed method has some drawbacks, such as a longer run time in comparison with traditional methods such as f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.

  5. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate these aliased dynamics with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.

  6. An interactive Doppler velocity dealiasing scheme

    NASA Astrophysics Data System (ADS)

    Pan, Jiawen; Chen, Qi; Wei, Ming; Gao, Li

    2009-10-01

    Doppler weather radars are capable of providing high quality wind data at a high spatial and temporal resolution. However, operational application of Doppler velocity data from weather radars is hampered by the infamous limitation of the velocity ambiguity. This paper reviews the cause of velocity folding and presents the unfolding method recently implemented for the CINRAD systems. A simple interactive method for velocity data, which corrects de-aliasing errors, has been developed and tested. It is concluded that the algorithm is very efficient and produces high quality velocity data.

  7. Wiener-matrix image restoration beyond the sampling passband

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    A finer-than-sampling-lattice resolution image can be obtained using multiresponse image gathering and Wiener-matrix restoration. The multiresponse image gathering weighs the within-passband and aliased signal components differently, allowing the Wiener-matrix restoration filter to unscramble these signal components and restore spatial frequencies beyond the sampling passband of the photodetector array. The multiresponse images can be reassembled into a single minimum-mean-square-error image with a resolution that is sqrt(A) times finer than the photodetector-array sampling lattice.
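
    For intuition, the single-response analogue of this restoration is the classical scalar Wiener filter. The sketch below is our simplification, with an assumed transfer function and SNR; the Wiener-matrix method generalizes this to jointly unscramble aliased components across multiple responses:

        import numpy as np

        rng = np.random.default_rng(2)
        truth = rng.normal(size=256).cumsum()       # smooth stand-in scene
        H = np.exp(-np.linspace(0.0, 4.0, 129))     # assumed transfer function
        blurred = np.fft.irfft(np.fft.rfft(truth) * H, n=256)
        noisy = blurred + 0.05 * rng.normal(size=256)

        snr = 100.0                                 # assumed signal-to-noise ratio
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
        restored = np.fft.irfft(np.fft.rfft(noisy) * W, n=256)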

  8. Reconstruction of full high-resolution HSQC using signal split in aliased spectra.

    PubMed

    Foroozandeh, Mohammadali; Jeannerat, Damien

    2015-11-01

    Resolution enhancement is a long-sought goal in NMR spectroscopy. In conventional multidimensional NMR experiments, such as the 1H-13C HSQC, the resolution in the indirect dimensions is typically 100 times lower than in 1D spectra because it is limited by the experimental time. Reducing the spectral window can significantly increase the resolution, but at the cost of ambiguities in frequencies as a result of spectral aliasing. Fortunately, this information is not completely lost and can be retrieved using methods in which chemical shifts are encoded in the aliased spectra and decoded after processing to reconstruct a high-resolution 1H-13C HSQC spectrum with full spectral width and a resolution similar to that of 1D spectra. We applied a new reconstruction method, RHUMBA (reconstruction of high resolution using multiplets built on aliased spectra), to spectra obtained from the differential evolution for non-ambiguous aliasing HSQC and the new AMNA (additional modulation for non-ambiguous aliasing) HSQC experiments. The reconstructed spectra significantly facilitate both manual and automated spectral analyses and structure elucidation based on heteronuclear 2D experiments. The resolution is enhanced by two orders of magnitude without the usual complications due to spectral aliasing. Copyright © 2015 John Wiley & Sons, Ltd.
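
    The decoding rests on a simple relation: a peak aliased n times in a reduced spectral window sits an integer number of spectral widths away from its observed position. A toy illustration (ours, with made-up numbers; not the RHUMBA algorithm itself):

        def true_shift(observed_ppm, n_folds, spectral_width_ppm):
            # Recover the true indirect-dimension shift of a peak that has
            # been aliased n_folds times in a reduced spectral window.
            return observed_ppm + n_folds * spectral_width_ppm

        # A 13C peak observed at 23.4 ppm in a 10 ppm window, folded 5 times:
        print(true_shift(23.4, 5, 10.0))   # -> 73.4 ppm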

  9. Simulation Study of a Follow-on Gravity Mission to GRACE

    NASA Technical Reports Server (NTRS)

    Loomis, Bryant D.; Nerem, R. S.; Luthcke, Scott B.

    2012-01-01

    The gravity recovery and climate experiment (GRACE) has been providing monthly estimates of the Earth's time-variable gravity field since its launch in March 2002. The GRACE gravity estimates are used to study temporal mass variations on global and regional scales, which are largely caused by a redistribution of water mass in the Earth system. The accuracy of the GRACE gravity fields is primarily limited by the satellite-to-satellite range-rate measurement noise, accelerometer errors, attitude errors, orbit errors, and temporal aliasing caused by unmodeled high-frequency variations in the gravity signal. Recent work by Ball Aerospace and Technologies Corp., Boulder, CO has resulted in the successful development of an interferometric laser ranging system to specifically address the limitations of the K-band microwave ranging system that provides the satellite-to-satellite measurements for the GRACE mission. Full numerical simulations are performed for several possible configurations of a GRACE Follow-On (GFO) mission to determine if a future satellite gravity recovery mission equipped with a laser ranging system will provide better estimates of time-variable gravity, thus benefiting many areas of Earth systems research. The laser ranging system improves the range-rate measurement precision to approximately 0.6 nm/s, as compared to approximately 0.2 micrometers/s for the GRACE K-band microwave ranging instrument. Four different mission scenarios are simulated to investigate the effect of the better instrument at two different altitudes. The first pair of simulated missions is flown at GRACE altitude (approx. 480 km) assuming on-board accelerometers with the same noise characteristics as those currently used for GRACE. The second pair of missions is flown at an altitude of approx. 250 km, which requires a drag-free system to prevent satellite re-entry. In addition to allowing a lower satellite altitude, the drag-free system also reduces the errors associated with the accelerometer. All simulated mission scenarios assume a co-orbiting two-satellite pair similar to GRACE in a near-polar, near-circular orbit. A method for local time-variable gravity recovery through mass concentration blocks (mascons) is used to form simulated gravity estimates for Greenland and the Amazon region for three GFO configurations and GRACE. Simulation results show that the increased precision of the laser does not improve gravity estimation when flown with on-board accelerometers at the same altitude and spacecraft separation as GRACE, even when time-varying background models are not included. This study also shows that only modest improvement is realized for the best-case scenario (laser, low altitude, drag-free) as compared to GRACE, due to temporal aliasing errors. These errors are caused by high-frequency variations in the hydrology signal and imperfections in the atmospheric, oceanographic, and tidal models which are used to remove unwanted signal. This work concludes that applying the updated technologies alone will not immediately advance the accuracy of the gravity estimates. If the scientific objectives of a GFO mission require more accurate gravity estimates, then future work should focus on improvements in the geophysical models and on ways in which the mission design or data processing could reduce the effects of temporal aliasing.

  10. Evaluating Health Outcomes of Criminal Justice Populations Using Record Linkage: The Importance of Aliases

    ERIC Educational Resources Information Center

    Larney, Sarah; Burns, Lucy

    2011-01-01

    Individuals in contact with the criminal justice system are a key population of concern to public health. Record linkage studies can be useful for studying health outcomes for this group, but the use of aliases complicates the process of linking records across databases. This study was undertaken to determine the impact of aliases on sensitivity…

  11. Cartographic symbol library considering symbol relations based on anti-aliasing graphic library

    NASA Astrophysics Data System (ADS)

    Mei, Yang; Li, Lin

    2007-06-01

    Cartographic visualization represents geographic information in map form, which enables us to retrieve useful geospatial information. In a digital environment, the cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System as well. Existing cartographic symbol libraries have two flaws: poor display quality and a lack of symbol-relation adjustment. Statistics presented in this paper indicate that aliasing is a major factor in symbol display quality on graphic display devices, so effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, in addition to displaying individual features, but current cartographic symbol libraries do not have this capability. This paper creates a cartographic symbol design model to implement symbol-relation adjustment; consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. Samples from the anti-aliasing graphic library and the cartographic symbol library are presented, and the results show that both libraries offer better efficiency and display quality.

  12. Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.

    PubMed

    Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien

    2018-05-01

    RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure, since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Its advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease the data volume significantly (by a factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist sampling were compared in vitro and in vivo. The estimation errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed from a drastically reduced number of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
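
    A quick numerical check of the principle (ours, with illustrative frequencies): a narrowband signal on a 5 MHz carrier sampled at 4 MS/s, well below the Nyquist rate for the carrier, aliases cleanly to 1 MHz and remains recoverable:

        import numpy as np

        fc, fs = 5e6, 4e6               # carrier and sub-Nyquist sampling rate
        t = np.arange(4096) / fs
        x = np.cos(2 * np.pi * fc * t)  # sampled bandpass signal

        spec = np.abs(np.fft.rfft(x))
        f = np.fft.rfftfreq(t.size, d=1 / fs)
        print(f[np.argmax(spec)])       # ~1.0e6 Hz: the alias of the carrier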

  13. A novel aliasing-free subband information fusion approach for wideband sparse spectral estimation

    NASA Astrophysics Data System (ADS)

    Luo, Ji-An; Zhang, Xiao-Ping; Wang, Zhi

    2017-12-01

    Wideband sparse spectral estimation is generally formulated as a multi-dictionary/multi-measurement (MD/MM) problem which can be solved by using group sparsity techniques. In this paper, the MD/MM problem is reformulated as a single sparse indicative vector (SIV) recovery problem at the cost of introducing an additional system error. Thus, the number of unknowns is reduced greatly. We show that the system error can be neglected under certain conditions. We then present a new subband information fusion (SIF) method to estimate the SIV by jointly utilizing all the frequency bins. With orthogonal matching pursuit (OMP) leveraging the binary property of SIV's components, we develop a SIF-OMP algorithm to reconstruct the SIV. The numerical simulations demonstrate the performance of the proposed method.

  14. Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms

    DTIC Science & Technology

    2009-09-01

    ...undersampled point spread function. In the literature these types of algorithms are sometimes included under the broad umbrella of superresolution. However, in the current... We use one of these patterns to visually demonstrate successful de-aliasing. Subject terms: image de-aliasing; superresolution; microscanning.

  15. Viewing-zone enlargement method for sampled hologram that uses high-order diffraction.

    PubMed

    Mishina, Tomoyuki; Okui, Makoto; Okano, Fumio

    2002-03-10

    We demonstrate a method of enlarging the viewing zone for holograms that have a pixel structure. First, the aliasing generated by the sampling of a hologram by its pixels is described. Next, the high-order diffracted beams reproduced from a hologram containing aliasing are explained. Finally, we show that the viewing zone can be enlarged by combining these high-order reconstructed beams from the hologram with aliasing.

  16. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" (bitbucket.org/ecus/sedproxy) numerically simulates the creation, archiving and observation of marine-sediment-archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time slices. To address this, we derive closed-form expressions for the power spectral density of the various error sources. The power spectra describe both the magnitude and the autocorrelation structure of the error, allowing timescale-dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time series of the last millennium, the Holocene, and the deglaciation. While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows a comprehensive exploration of parameter space and mapping of climate-signal reconstructability, conditional on the climate and sampling conditions.

  17. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, L.; Wang, G.; Wessel, P.

    2017-12-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm × 3 cm) to handprint (e.g., 10 cm × 10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain portable DEMs. It is well known that downsampling can result in aliasing, which causes different signal components to become indistinguishable when the signal is reconstructed from datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing have not been fully investigated in the open literature on DEMs. This study aims to investigate the spatial aliasing problem and implement an anti-aliasing procedure for regridding dense TLS data. The TLS data collected in the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter, employed in the Generic Mapping Tools (GMT) software package, as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
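
    The recipe is straightforward to reproduce outside GMT. A minimal one-dimensional sketch (ours, using scipy in place of GMT, with illustrative spacings): low-pass at a cutoff wavelength of three times the target grid size, then decimate:

        import numpy as np
        from scipy.signal import butter, filtfilt

        d0 = 0.05            # original point spacing, m
        grid = 0.50          # target DEM grid size, m
        lam_c = 3 * grid     # recommended cutoff wavelength, m

        # 4th-order Butterworth, cutoff normalized by the Nyquist frequency
        b, a = butter(4, (1 / lam_c) / (1 / (2 * d0)))
        profile = np.random.default_rng(1).normal(size=2000)  # stand-in elevations
        smooth = filtfilt(b, a, profile)     # zero-phase low-pass
        dem = smooth[::int(grid / d0)]       # downsample to the target grid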

  18. RADIAL VELOCITY PLANETS DE-ALIASED: A NEW, SHORT PERIOD FOR SUPER-EARTH 55 Cnc e

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Rebekah I.; Fabrycky, Daniel C., E-mail: rdawson@cfa.harvard.ed, E-mail: daniel.fabrycky@gmail.co

    2010-10-10

    Radial velocity measurements of stellar reflex motion have revealed many extrasolar planets, but gaps in the observations produce aliases, spurious frequencies that are frequently confused with the planets' orbital frequencies. In the case of Gl 581 d, the distinction between an alias and the true frequency was the distinction between a frozen, dead planet and a planet possibly hospitable to life. To improve the characterization of planetary systems, we describe how aliases originate and present a new approach for distinguishing between orbital frequencies and their aliases. Our approach harnesses features in the spectral window function to compare the amplitude and phase of predicted aliases with peaks present in the data. We apply it to confirm prior alias distinctions for the planets GJ 876 d and HD 75898 b. We find that the true periods of Gl 581 d and HD 73526 b/c remain ambiguous. We revise the periods of HD 156668 b and 55 Cnc e, which were afflicted by daily aliases. For HD 156668 b, the correct period is 1.2699 days and the minimum mass is (3.1 +/- 0.4) M_Earth. For 55 Cnc e, the correct period is 0.7365 days, the shortest of any known planet, and the minimum mass is (8.3 +/- 0.3) M_Earth. This revision produces a significantly improved five-planet Keplerian fit for 55 Cnc, and a self-consistent dynamical fit describes the data just as well. As radial velocity techniques push to ever-smaller planets, often found in systems of multiple planets, distinguishing true periods from aliases will become increasingly important.
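
    The daily-alias relation behind these revisions is easy to verify. A back-of-the-envelope check (ours): under once-per-sidereal-day sampling, the revised 0.7365-day period of 55 Cnc e has a first-order alias near the previously reported period of about 2.8 days:

        def alias_period(p_true_days, f_window=1.0027379):
            # First-order alias of a true period under periodic sampling;
            # f_window is the sampling frequency (1/sidereal day, per day).
            return 1.0 / abs(1.0 / p_true_days - f_window)

        print(alias_period(0.7365))   # ~2.82 days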

  19. Moving microphone arrays to reduce spatial aliasing in the beamforming technique: theoretical background and numerical investigation.

    PubMed

    Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello

    2008-12-01

    This paper introduces a measurement technique aimed at reducing, or possibly eliminating, the spatial aliasing problem in the beamforming technique. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement operation. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing spatial aliasing. A number of different array configurations are numerically investigated, together with the most important parameters governing this measurement technique. A set of numerical results concerning the case of a planar rotating array is shown, together with a first experimental validation of the method.

  20. Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument

    NASA Astrophysics Data System (ADS)

Smith, G. L.; Manalo-Smith, Natividad; Priestley, Kory

    2014-10-01

    The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so that the size of the smallest features which can be resolved from the data increases, and spatial sampling errors increase, with nadir angle. This paper presents an analysis of the effect of nadir angle on the spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors are created by smoothing of features at or below the footprint size (blurring) and by inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.

  1. Assessment of terrestrial water contributions to polar motion from GRACE and hydrological models

    NASA Astrophysics Data System (ADS)

    Jin, S. G.; Hassan, A. A.; Feng, G. P.

    2012-12-01

    The hydrological contribution to polar motion is a major challenge in explaining the observed geodetic residual of non-atmospheric and non-oceanic excitations, since hydrological models have limited input of comprehensive global direct observations. Although global terrestrial water storage (TWS) estimated from the Gravity Recovery and Climate Experiment (GRACE) provides a new opportunity to study the hydrological excitation of polar motion, the GRACE gridded data are subject to the post-processing de-striping algorithm, spatial gridded mapping and filter smoothing effects, as well as aliasing errors. In this paper, the hydrological contributions to polar motion are investigated and evaluated at seasonal and intra-seasonal time scales using the recovered degree-2 harmonic coefficients from all GRACE spherical harmonic coefficients and hydrological model data with the same filter smoothing and recovering methods, including the Global Land Data Assimilation Systems (GLDAS) model, the Climate Prediction Center (CPC) model, the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis products and the European Centre for Medium-Range Weather Forecasts (ECMWF) operational model (opECMWF). It is shown that GRACE is better at explaining the geodetic residual of non-atmospheric and non-oceanic polar motion excitations at the annual period, while the models give worse estimates with a larger phase shift or amplitude bias. At the semi-annual period, the GRACE estimates are also generally closer to the geodetic residual, but with some biases in phase or amplitude due mainly to aliasing errors at near-semi-annual periods from geophysical models. For periods of less than one year, both the hydrological models and GRACE are generally worse at explaining the intra-seasonal polar motion excitations.

  2. Pattern recognition invariant under changes of scale and orientation

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, and lines in a multi-dimensional feature space correspond to each target with out-of-plane orientations over 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.

  3. The resolution capability of an irregularly sampled dataset: With application to Geosat altimeter data

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1994-01-01

    A formalism is presented for determining the wavenumber-frequency transfer function associated with an irregularly sampled multidimensional dataset. This transfer function reveals the filtering characteristics and aliasing patterns inherent in the sample design. In combination with information about the spectral characteristics of the signal, the transfer function can be used to quantify the spatial and temporal resolution capability of the dataset. Application of the method to idealized Geosat altimeter data (i.e., neglecting measurement errors and data dropouts) concludes that the Geosat orbit configuration is capable of resolving scales of about 3 deg in latitude and longitude by about 30 days.

  4. Shape of the ocean surface and implications for the Earth's interior: GEOS-3 results

    NASA Technical Reports Server (NTRS)

    Chapman, M. E.; Talwani, M.; Kahle, H.; Bodine, J. H.

    1979-01-01

    A new set of 1 deg x 1 deg mean free-air anomalies was used to construct a gravimetric geoid by Stokes' formula for the Indian Ocean. Utilizing this 1 deg x 1 deg gravimetric geoid, comparisons were made with GEOS-3 radar altimeter estimates of geoid height. Most commonly there were constant offsets and long-wavelength discrepancies between the two data sets; there were many probable causes, including radial orbit error, scale errors in the geoid, or bias errors in altitude determination. Across the Aleutian Trench the 1 deg x 1 deg gravimetric geoids did not capture the entire depth of the geoid anomaly, due to averaging over 1 deg squares and subsequent aliasing of the data. After adjustment of the GEOS-3 data to eliminate long-wavelength discrepancies, agreement between the altimeter geoid and the gravimetric geoid was between 1.7 and 2.7 meters in rms error. For purposes of geological interpretation, techniques were developed to directly compute the geoid anomaly over models of density within the Earth. In observing the results from satellite altimetry it was possible to identify geoid anomalies over different geologic features in the ocean. Examples and significant results are reported.

  5. Anti-aliasing Wiener filtering for wave-front reconstruction in the spatial-frequency domain for high-order astronomical adaptive-optics systems.

    PubMed

    Correia, Carlos M; Teixeira, Joel

    2014-12-01

    Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement-noise, and aliasing propagation coefficients as a function of the system order, and compare them to classical estimates using least-squares filters. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filters, whereas the noise propagation is around 80%. Contrast improvements by factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.

  6. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

Xiong, Lin; Wang, Guoquan; Wessel, Paul

    2017-03-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm×3 cm) to handprint (e.g., 10 cm×10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain manageable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing caused by downsampling have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem of regridding dense TLS data. The TLS data collected from the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with two different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.

  7. Sampling frequency for water quality variables in streams: Systems analysis to quantify minimum monitoring rates.

    PubMed

    Chappell, Nick A; Jones, Timothy D; Tych, Wlodek

    2017-10-15

    Insufficient temporal monitoring of water quality in streams or engineered drains alters the apparent shape of storm chemographs, resulting in shifted model parameterisations and changed interpretations of the solute sources that have produced episodes of poor water quality. This so-called 'aliasing' phenomenon is poorly recognised in water research. Using advances in in-situ sensor technology, it is now possible to monitor sufficiently frequently to avoid the onset of aliasing. A systems modelling procedure is presented allowing objective identification of the sampling rates needed to avoid aliasing within strongly rainfall-driven chemical dynamics. In this study, aliasing of storm chemograph shapes was quantified by changes in the time constant parameter (TC) of transfer functions. As a proportion of the original TC, the onset of aliasing varied between watersheds, ranging from 3.9-7.7 to 54-79 %TC (or 110-160 to 300-600 min). However, a minimum monitoring rate could be identified for all datasets if the modelling results were presented in the form of a new statistic, ΔTC. For the eight H+, DOC and NO3-N datasets examined from a range of watershed settings, an empirically derived threshold of 1.3(ΔTC) could be used to quantify minimum monitoring rates within sampling protocols to avoid artefacts in subsequent data analysis. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

    Spatio-temporal aliasing affects pilot performance and control behavior. For increasing refresh rates: 1) Significant change in control behavior: a) Increase in visual gain and neuromuscular frequency. b) Decrease in visual time delay. 2) Increase in tracking performance: a) Decrease in RMSe. b) Increase in crossover frequency.

  9. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    PubMed

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
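
    The nonlinearity mechanism can be reproduced numerically. In the sketch below (ours; the harmonic amplitude is illustrative), a third harmonic in the correlation waveform adds a cyclic error to the standard four-bucket arctangent phase estimator:

        import numpy as np

        def four_bucket_phase(phi, h3=0.2):
            # Four samples of a correlation waveform containing a relative
            # third-harmonic amplitude h3, fed to the arctangent estimator.
            k = np.arange(4) * np.pi / 2
            s = np.cos(phi + k) + h3 * np.cos(3 * (phi + k))
            return np.arctan2(s[3] - s[1], s[0] - s[2])

        phis = np.linspace(0, 2 * np.pi, 9, endpoint=False)
        err = [(four_bucket_phase(p) - p + np.pi) % (2 * np.pi) - np.pi
               for p in phis]
        print(np.round(err, 3))   # nonzero cyclic error -> range nonlinearity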

  10. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamic with random initial conditions.

  11. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  12. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat-plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.

  13. Fourier Theory Explanation for the Sampling Theorem Demonstrated by a Laboratory Experiment.

    ERIC Educational Resources Information Center

    Sharma, A.; And Others

    1996-01-01

    Describes a simple experiment that uses a CCD video camera, a display monitor, and a laser-printed bar pattern to illustrate signal sampling problems that produce aliasing or moiré fringes in images. Uses the Fourier transform to provide an appropriate and elegant means to explain the sampling theorem and the aliasing phenomenon in CCD-based…
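
    The heart of the demonstration reduces to a few lines. In this sketch (ours, with an arbitrary pattern frequency), a bar pattern above the Nyquist limit of the sampling lattice reappears at a spurious low moiré frequency:

        import numpy as np

        f_bars = 0.75         # bar frequency, cycles per pixel (Nyquist = 0.5)
        x = np.arange(64)     # detector (pixel) positions
        samples = np.sin(2 * np.pi * f_bars * x)

        spec = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(x.size)
        print(freqs[np.argmax(spec)])   # 0.25: the aliased (moire) frequency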

  14. On the aliasing of the solar cycle in the lower stratospheric tropical temperature

    NASA Astrophysics Data System (ADS)

    Kuchar, Ales; Ball, William T.; Rozanov, Eugene V.; Stenke, Andrea; Revell, Laura; Miksovsky, Jiri; Pisoft, Petr; Peter, Thomas

    2017-09-01

    The double-peaked response of the tropical stratospheric temperature profile to the 11 year solar cycle (SC) has been well documented. However, there are concerns about the origin of the lower peak due to potential aliasing with volcanic eruptions or the El Niño-Southern Oscillation (ENSO) detected using multiple linear regression analysis. We confirm the aliasing using results from the chemistry-climate model (CCM) SOCOLv3 obtained in the framework of the International Global Atmospheric Chemistry/Stratosphere-troposphere Processes And their Role in Climate Chemistry-Climate Model Initiative phase 1. We further show that even without major volcanic eruptions included in transient simulations, the lower stratospheric response exhibits a residual peak when historical sea surface temperatures (SSTs)/sea ice coverage (SIC) are used. Only the use of climatological SSTs/SICs in addition to background stratospheric aerosols removes volcanic and ENSO signals and results in an almost complete disappearance of the modeled solar signal in the lower stratospheric temperature. We demonstrate that the choice of temporal subperiod considered for the regression analysis has a large impact on the estimated profile signal in the lower stratosphere: at least 45 consecutive years are needed to avoid the large aliasing effect of SC maxima with the volcanic eruptions in 1982 and 1991 in historical simulations, reanalyses, and observations. The application of volcanic forcing compiled for phase 6 of the Coupled Model Intercomparison Project (CMIP6) in the CCM SOCOLv3 reduces the warming overestimation in the tropical lower stratosphere and the volcanic aliasing of the temperature response to the SC, although it does not eliminate it completely.

  15. 78 FR 69927 - In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... DEPARTMENT OF STATE [Public Notice 8527] In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Record...

  16. 75 FR 28849 - Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ... DEPARTMENT OF STATE [Public Notice 7026] Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Records assembled in these...

  17. Anti-aliasing filter design on spaceborne digital receiver

    NASA Astrophysics Data System (ADS)

    Yu, Danru; Zhao, Chonghui

    2009-12-01

    In recent years, with the development of satellite observation technologies, more and more active remote sensing technologies have been adopted in spaceborne systems. A spaceborne precipitation radar depends heavily on high-performance digital processing to collect meaningful rain echo data. This increases the complexity of the spaceborne system and requires a high-performance, reliable digital receiver. This paper analyzes the frequency aliasing in intermediate-frequency signal sampling during digital down-conversion (DDC) in spaceborne radar and presents an effective digital filter. By analysis and calculation, we choose reasonable parameters for the half-band filters to suppress the frequency aliasing in the DDC. Compared with a traditional filter, the FPGA resource cost in our system is reduced by over 50%. This can effectively reduce the complexity of the spaceborne digital receiver and improve the reliability of the system.
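
    The resource saving follows from a defining property of half-band filters: every second tap away from the center is zero, so roughly half the multipliers can be dropped. An illustrative windowed-sinc design (ours; the tap count and window are assumptions):

        import numpy as np

        n_taps = 31
        n = np.arange(n_taps) - (n_taps - 1) / 2
        h = 0.5 * np.sinc(n / 2) * np.hamming(n_taps)  # half-band prototype
        h /= h.sum()                                   # unity DC gain

        # Alternate taps are zero except the center one, halving multipliers
        print(np.round(h[1::2], 4))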

  18. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss the testing of image processing algorithms for the mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when the sensors were subjected to a quantum cascade laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable-frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.

  19. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.

  20. A Deep Analysis of Center Displacement in An Idealized Tropical Cyclone with Low-wavenumber Asymmetries

    NASA Astrophysics Data System (ADS)

    Zhao, C.; Song, J.; Leng, H.

    2017-12-01

    The Tropical Cyclone (TC) center-finding technique plays an important role when diagnostic analyses of TC structure are performed, especially when dealing with low-wavenumber asymmetries. Previous work has established that the diagnosed structure of TCs can vary greatly depending on the displacement introduced by center-finding techniques. As it is difficult to define a true TC center in the real world, this work explores how low-wavenumber azimuthal Fourier analyses vary with center displacement, using idealized, parametric TC-like vortices with different perturbation structures. It is shown that the error is sensitive to the location and radial structure of the added perturbation. When azimuthal wavenumber-1 and wavenumber-3 asymmetries are added, increasing radial shear of the initial asymmetries significantly enhances the corresponding spectral energy around the radius of maximum wind (RMW), and it also has a large effect on the spectral energy of wavenumber 2. In contrast, the wavenumber-2 cases show a reduction from 1 RMW outward as shear increases, with little effect on the spectral energy of wavenumbers 1 or 3. Previous findings indicated that the aliasing depends on the placement of the center relative to the location of the asymmetries, which remains valid in these sheared situations. Moreover, the aliasing caused by phase displacement is less sensitive to radial shear in the wavenumber-2 and wavenumber-3 cases, while it shows significant amplification and deformation when a wavenumber-1 asymmetry is added.
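
    A compact sketch of the aliasing mechanism at the heart of this kind of analysis (the vortex profile, asymmetry amplitude, and displacement below are illustrative assumptions): sampling a field that contains only wavenumber-0 and wavenumber-1 structure on rings about a displaced centre produces spurious power at higher wavenumbers.

        import numpy as np

        RMW = 50.0                                       # radius of max wind (km)

        def wind(x, y):
            # Axisymmetric profile plus a pure wavenumber-1 asymmetry.
            r = np.hypot(x, y)
            th = np.arctan2(y, x)
            return (r / RMW) * np.exp(1.0 - r / RMW) * (40.0 + 5.0 * np.cos(th))

        def azimuthal_amplitudes(center, radius, n=256):
            # Sample the field on a ring about `center`, FFT in azimuth, and
            # return the amplitudes of wavenumbers 0..3.
            th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            ring = wind(center[0] + radius * np.cos(th),
                        center[1] + radius * np.sin(th))
            return np.abs(np.fft.rfft(ring))[:4] / n

        print(azimuthal_amplitudes((0.0, 0.0), RMW))     # power only in WN 0 and 1
        print(azimuthal_amplitudes((5.0, 0.0), RMW))     # spurious WN 2, 3 appear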

  1. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimations: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost is higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool, checking the constant timing resolution behavior of a given timing pick-off method regardless of source location changes. Lastly, a performance comparison of several digital timing methods is also shown.

  2. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI-based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies in k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When there exist eddy current cross terms (e.g., in oblique-plane EPI), 2D phase correction is needed to effectively reduce Nyquist artifact. Second, minuscule motion-induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule motion-induced aliasing artifact are typically removed sequentially in two stages. Although two-stage phase correction generally performs well for non-oblique-plane EPI data obtained from a well-calibrated system, residual artifacts may still be pronounced in oblique-plane EPI data or when there exist eddy current cross terms. To address this challenge, here we report a new composite 2D phase correction procedure, which effectively removes Nyquist artifact and minuscule motion-induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI-based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI, and should prove highly valuable for clinical uses and research studies of DWI. PMID:27114342

  3. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with built-in spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of standard MUSIC.
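
    For reference, a minimal sketch of the standard full-search MUSIC spectrum that the proposed method accelerates (array size, source directions, noise level, and search grid are illustrative; this is not the paper's reduced-search variant):

        import numpy as np

        rng = np.random.default_rng(1)
        M, N = 8, 200                                  # sensors, snapshots
        true_doas = np.deg2rad([-20.0, 35.0])

        def steering(theta):
            # Half-wavelength ULA steering vectors, one column per angle.
            return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(theta))

        A = steering(true_doas)
        S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
        noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
        X = A @ S + 0.1 * noise

        R = X @ X.conj().T / N                         # sample covariance
        w, V = np.linalg.eigh(R)                       # eigenvalues ascending
        En = V[:, :M - 2]                              # noise subspace

        grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
        P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
        peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
        best = peaks[np.argsort(P[peaks])[-2:]]
        print(np.sort(np.rad2deg(grid[best])))         # ~[-20, 35]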

  4. Infrared Sensor Readout Design

    DTIC Science & Technology

    1975-11-01

    Line Replaceable Unit; LT: Level Translator; MRT: Minimum Resolvable Temperature; MTF: Modulation Transfer Function; PC: Printed Circuit; SCCCD: Surface... reduced, not only will the aliased noise increase, but signal aliasing will also start to occur. At the display level this means that sharp edges could... converted from a quantity of charge to a voltage-level shift by the action of the precharge pulse that presets the potential on the output diode node to

  5. Staggered Multiple-PRF Ultrafast Color Doppler.

    PubMed

    Posada, Daniel; Poree, Jonathan; Pellissier, Arnaud; Chayer, Boris; Tournoux, Francois; Cloutier, Guy; Garcia, Damien

    2016-06-01

    Color Doppler imaging is an established pulsed ultrasound technique to visualize blood flow non-invasively. High-frame-rate (ultrafast) color Doppler, by emissions of plane or circular wavefronts, allows a severalfold increase in frame rates. Conventional and ultrafast color Doppler are both limited by the range-velocity dilemma, which may result in velocity folding (aliasing) for large depths and/or large velocities. We investigated multiple pulse-repetition-frequency (PRF) emissions arranged in a series of staggered intervals to remove aliasing in ultrafast color Doppler. Staggered PRF is an emission process where time delays between successive pulse transmissions change in an alternating way. We tested staggered dual- and triple-PRF ultrafast color Doppler, 1) in vitro in a spinning disc and a free jet flow, and 2) in vivo in a human left ventricle. The in vitro results showed that the Nyquist velocity could be extended to up to 6 times the conventional limit. We found coefficients of determination r² ≥ 0.98 between the de-aliased and ground-truth velocities. Consistent de-aliased Doppler images were also obtained in the human left heart. Our results demonstrate that staggered multiple-PRF ultrafast color Doppler is efficient for high-velocity high-frame-rate blood flow imaging. This is particularly relevant for new developments in ultrasound imaging relying on accurate velocity measurements.
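
    The core arithmetic can be illustrated with a plain (non-staggered) dual-PRF scheme, a simpler relative of the method above; the Nyquist velocities and true velocity below are illustrative assumptions. Two different Nyquist limits fold the true velocity differently, and a search over wrap counts recovers it well beyond either limit:

        import numpy as np

        def fold(v, vn):
            # Alias a true velocity into the Nyquist interval [-vn, vn).
            return (v + vn) % (2.0 * vn) - vn

        def dealias(v1, v2, vn1, vn2, k_max=4):
            # Search over wrap counts for the unwrapped velocity consistent
            # with both aliased measurements.
            k1, k2 = np.meshgrid(np.arange(-k_max, k_max + 1),
                                 np.arange(-k_max, k_max + 1), indexing="ij")
            c1 = v1 + 2.0 * k1 * vn1
            c2 = v2 + 2.0 * k2 * vn2
            i = np.unravel_index(np.argmin(np.abs(c1 - c2)), c1.shape)
            return 0.5 * (c1[i] + c2[i])

        vn1, vn2 = 0.75, 1.0            # Nyquist velocities (m/s) of the two PRFs
        v_true = 2.6                    # well beyond either Nyquist limit
        print(dealias(fold(v_true, vn1), fold(v_true, vn2), vn1, vn2))  # ~2.6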

  6. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
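
    A widely used instance of the polynomial-correction idea described above is the two-sample "polyBLEP" residual, shown here as a hedged sketch (the paper's integrated Lagrange and B-spline corrections are higher-order variants of the same construction):

        import numpy as np

        def poly_blep(t, dt):
            # Two-sample polynomial residual around a unit step discontinuity.
            if t < dt:                      # just after the wrap
                t /= dt
                return t + t - t * t - 1.0
            if t > 1.0 - dt:                # just before the wrap
                t = (t - 1.0) / dt
                return t * t + t + t + 1.0
            return 0.0

        def sawtooth(freq, n, fs=44100.0):
            dt = freq / fs                  # normalized phase increment per sample
            phase, out = 0.0, np.empty(n)
            for i in range(n):
                # Trivial (aliasing) sawtooth minus the correction function.
                out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
                phase += dt
                if phase >= 1.0:
                    phase -= 1.0
            return out

        y = sawtooth(1046.5, 4096)          # C6 sawtooth with suppressed aliasing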

  7. Angular oversampling with temporally offset layers on multilayer detectors in computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjölin, Martin, E-mail: martin.sjolin@mi.physics.kth.se; Danielsson, Mats

    2016-06-15

    Purpose: Today’s computed tomography (CT) scanners operate at an increasingly high rotation speed in order to reduce motion artifacts and to fulfill the requirements of dynamic acquisition, e.g., perfusion and cardiac imaging, with lower angular sampling rate as a consequence. In this paper, a simple method for obtaining angular oversampling when using multilayer detectors in continuous rotation CT is presented. Methods: By introducing temporal offsets between the measurement periods of the different layers on a multilayer detector, the angular sampling rate can be increased by a factor equal to the number of layers on the detector. The increased angular sampling rate reduces the risk of producing aliasing artifacts in the image. A simulation of a detector with two layers is performed to prove the concept. Results: The simulation study shows that aliasing artifacts from insufficient angular sampling are reduced by the proposed method. Specifically, when imaging a single point blurred by a 2D Gaussian kernel, the method is shown to reduce the strength of the aliasing artifacts by approximately an order of magnitude. Conclusions: The presented oversampling method is easy to implement in today’s multilayer detectors and has the potential to reduce aliasing artifacts in the reconstructed images.

  8. Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm

    DOE PAGES

    Huang, C. -K.; Zeng, Y.; Wang, Y.; ...

    2016-10-01

    The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.

  9. Atmospheric Pressure Corrections in Geodesy and Oceanography: a Strategy for Handling Air Tides

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.; Ray, Richard D.

    2003-01-01

    Global pressure data are often needed for processing or interpreting modern geodetic and oceanographic measurements. The most common source of these data is the analysis or reanalysis products of various meteorological centers. Tidal signals in these products can be problematic for several reasons, including potentially aliased sampling of the semidiurnal solar tide as well as the presence of various modeling or timing errors. Building on the work of Van den Dool and colleagues, we lay out a strategy for handling atmospheric tides in (re)analysis data. The procedure also offers a method to account for ocean loading corrections in satellite altimeter data that are consistent with standard ocean-tide corrections. The proposed strategy has immediate application to the on-going Jason-1 and GRACE satellite missions.

  10. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  12. Post-Fisherian Experimentation: From Physical to Virtual

    DOE PAGES

    Jeff Wu, C. F.

    2014-04-24

    Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: principles of effect hierarchy, sparsity, and heredity for factorial designs, a new method called CME for de-aliasing aliased effects, and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.

  13. Determining Aliasing in Isolated Signal Conditioning Modules

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The basic concept of aliasing is this: converting analog data into digital data requires sampling the signal at a specific rate, known as the sampling frequency. The result of this conversion process is a new function, which is a sequence of digital samples. This new function has a frequency spectrum, which contains all the frequency components of the original signal. The Fourier transform mathematics of this process show that the frequency spectrum of the sequence of digital samples consists of the original signal's frequency spectrum plus copies of that spectrum shifted by all the harmonics of the sampling frequency. If the original analog signal is sampled in the conversion process at a minimum of twice the highest frequency component contained in the analog signal, and if the reconstruction process is limited to the highest frequency of the original signal, then the reconstructed signal accurately duplicates the original analog signal. It is this process that can give rise to aliasing.
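
    The folding described above can be verified in a few lines (frequencies chosen for illustration): a 9 Hz sine sampled at 10 Hz violates the twice-highest-frequency condition and lands on exactly the same samples as a 1 Hz sine of opposite phase.

        import numpy as np

        fs = 10.0                                # sampling frequency (Hz)
        t = np.arange(0, 2, 1 / fs)              # 2 s of samples
        f_high = 9.0                             # above the 5 Hz Nyquist limit
        # The 9 Hz tone is indistinguishable from its -1 Hz alias:
        print(np.allclose(np.sin(2 * np.pi * f_high * t),
                          np.sin(2 * np.pi * (f_high - fs) * t)))   # True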

  14. Simulation of sampling effects in FPAs

    NASA Astrophysics Data System (ADS)

    Cook, Thomas H.; Hall, Charles S.; Smith, Frederick G.; Rogne, Timothy J.

    1991-09-01

    The use of multiplexers and large focal plane arrays in advanced thermal imaging systems has drawn renewed attention to sampling and aliasing issues in imaging applications. As evidenced by discussions in a recent workshop, there is no clear consensus among experts on whether aliasing in sensor designs can be readily tolerated or must be avoided at all cost. Further, there is no straightforward analytical method that can answer the question, particularly when considering image interpreters as different as humans and autonomous target recognizers (ATRs). However, the means exist for investigating sampling and aliasing issues through computer simulation. The U.S. Army Tank-Automotive Command (TACOM) Thermal Image Model (TTIM) provides realistic sensor imagery that can be evaluated by both human observers and ATRs. This paper briefly describes the history and current status of TTIM, explains the simulation of FPA sampling effects, presents validation results of the FPA sensor model, and demonstrates the utility of TTIM for investigating sampling effects in imagery.

  15. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
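
    The Monte-Carlo step at the core of this approach can be sketched compactly (a generic illustration of a black-box divergence estimate of the kind the paper builds on, not the authors' exact implementation; the probe size eps and the toy operator are assumptions):

        import numpy as np

        def mc_divergence(recon, y, eps=1e-3, rng=None):
            # Black-box Monte-Carlo estimate of the divergence term in SURE:
            # probe the reconstruction `recon` with a random Rademacher vector.
            rng = rng or np.random.default_rng()
            b = rng.choice([-1.0, 1.0], size=y.shape)
            return float(np.vdot(b, recon(y + eps * b) - recon(y)).real / eps)

        # Toy check: f(y) = 0.5*y has divergence 0.5*y.size exactly.
        y = np.random.default_rng(0).standard_normal(128)
        print(mc_divergence(lambda v: 0.5 * v, y), 0.5 * y.size)   # ~64.0, 64.0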

  16. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstructing the distorted wavefront under test of a laser beam over a square area from the phase-difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error propagation coefficients is deduced for the case in which the phase-difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, number of sampling points, number of polynomial terms, and the noise propagation coefficients, and between shear ratio, number of sampling points, and the norm of the T matrix, are analyzed. These results provide theoretical reference and guidance for the optimal design of radial shearing interferometry systems.

  17. A new unified approach to determine geocentre motion using space geodetic and GRACE gravity data

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Kusche, Jürgen; Landerer, Felix W.

    2017-06-01

    Geocentre motion between the centre-of-mass of the Earth system and the centre-of-figure of the solid Earth surface is a critical signature of the degree-1 components of the global surface mass transport process, which includes sea level rise, ice mass imbalance and continental-scale hydrological change. To complement GRACE data for complete-spectrum mass transport monitoring, geocentre motion needs to be measured accurately. However, current methods, namely the geodetic translational approach and global inversions of various combinations of geodetic deformation, simulated ocean bottom pressure and GRACE data, contain substantial biases and systematic errors. Here, we demonstrate a new and more reliable unified approach to geocentre motion determination using a recently formed satellite laser ranging based geocentric displacement time-series of an expanded geodetic network of all four space geodetic techniques and GRACE gravity data. The unified approach exploits both translational and deformational signatures of the displacement data, while the addition of GRACE's near-global coverage significantly reduces the biases found in the translational approach and the spectral aliasing errors in the inversion.

  18. A novel x-ray detector design with higher DQE and reduced aliasing: Theoretical analysis of x-ray reabsorption in detector converter material

    NASA Astrophysics Data System (ADS)

    Nano, Tomi; Escartin, Terenz; Karim, Karim S.; Cunningham, Ian A.

    2016-03-01

    The ability to improve visualization of structural information in digital radiography without increasing radiation exposure requires improved image quality across all spatial frequencies, especially at high frequencies. The detective quantum efficiency (DQE) as a function of spatial frequency quantifies the image quality given by an x-ray detector. We present a method of increasing DQE at high spatial frequencies by improving the modulation transfer function (MTF) and reducing noise aliasing. The Apodized Aperture Pixel (AAP) design uses a detector with micro-elements to synthesize desired pixels and provide higher DQE than conventional detector designs. A cascaded system analysis (CSA) that incorporates x-ray interactions is used for comparison of the theoretical MTF, noise power spectrum (NPS), and DQE. Signal and noise transfer through the converter material is shown to consist of correlated and uncorrelated terms. The AAP design was shown to improve the DQE for material types with predominantly correlated transfer (such as CsI) and with predominantly uncorrelated transfer (such as Se). Improvements in the MTF of 50% and in the DQE of 100% at the sampling cut-off frequency are obtained when uncorrelated transfer is prevalent through the converter material. Optimizing the high-frequency DQE results in improved image contrast and visualization of small structures and fine detail.

  19. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.

  20. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
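
    The paper's performance metric is easy to reproduce for two of the kernel families (a hedged sketch with an illustrative grid; note that B-spline interpolation proper also needs the prefilter mentioned above, which this comparison of raw kernels ignores):

        import numpy as np

        x = np.linspace(-4, 4, 8001)                     # fine kernel grid
        ax = np.abs(x)

        linear = np.clip(1.0 - ax, 0.0, None)            # triangle kernel
        cubic_bspline = np.where(ax < 1.0, 2.0/3.0 - ax**2 + ax**3 / 2.0,
                        np.where(ax < 2.0, (2.0 - ax)**3 / 6.0, 0.0))

        f = np.fft.rfftfreq(x.size, d=x[1] - x[0])       # cycles per sample spacing
        for name, k in [("linear", linear), ("cubic B-spline", cubic_bspline)]:
            K = np.abs(np.fft.rfft(np.fft.ifftshift(k)))
            K /= K[0]                                    # unit DC response
            # Response left in the replicated bands beyond the Nyquist
            # frequency (0.5 cycles/sample) aliases on re-sampling:
            print(name, K[f > 0.5].max())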

  1. On the sensitivity of transtensional versus transpressional tectonic regimes to remote dynamic triggering by Coulomb failure

    USGS Publications Warehouse

    Hill, David P.

    2015-01-01

     Accumulating evidence, although still strongly spatially aliased, indicates that although remote dynamic triggering of small-to-moderate (Mw<5) earthquakes can occur in all tectonic settings, transtensional stress regimes with normal and subsidiary strike-slip faulting seem to be more susceptible to dynamic triggering than transpressional regimes with reverse and subsidiary strike-slip faulting. Analysis of the triggering potential of Love- and Rayleigh-wave dynamic stresses incident on normal, reverse, and strike-slip faults assuming Andersonian faulting theory and simple Coulomb failure supports this apparent difference for rapid-onset triggering susceptibility.

  2. VizieR Online Data Catalog: Hα velocity curves of IM Eri (Armstrong+, 2013)

    NASA Astrophysics Data System (ADS)

    Armstrong, E.; Patterson, J.; Michelsen, E.; Thorstensen, J.; Uthas, H.; Vanmunster, T.; Hambsch, F.-J.; Roberts, G.; Dvorak, S.

    2015-01-01

    All data reported here were obtained by the globally distributed small telescopes of the Center for Backyard Astrophysics [see Skillman & Patterson (1993ApJ...417..298S) for details of the CBA instrumentation and observing procedure]. We obtained differential photometry of the CV with respect to a comparison star on the same field, and spliced overlapping data from different longitudes by adding small constants to establish a consistent instrumental scale. With an excellent span of longitudes, we essentially eliminated the possibility of daily aliasing of frequencies in the power spectra. In order to reach good signal-to-noise ratio with good time resolution, we generally observe in unfiltered light. This practice, however, eliminates the possibility of transforming to a standard magnitude. (2 data files).

  3. Graphics processing unit (GPU) real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  4. Event Compression Using Recursive Least Squares Signal Processing.

    DTIC Science & Technology

    1980-07-01

    decimation of the Burstl signal with and without all-pole prefiltering to reduce aliasing. Figures 3.32a-c and 3.33a-c show the same examples but with 4/1... to reduce aliasing, we found that it did not improve the quality of the event-compressed signals. If filtering must be performed, all-pole filtering...

  5. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers utilise the subsampling method to down-convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down-converted using a subsampling receiver, but using an incorrect subsampling frequency could result in the signals aliasing one another after down-conversion. Existing methods for subsampling multiband signals have focused on down-converting all the signals without any aliasing between them; the case considered initially was a dual-band signal, later extended to the more general multiband case. In this thesis, a new method is proposed under the assumption that only one target signal needs to remain free of overlap from the other multiband signals that are down-converted at the same time. The proposed method introduces formulas, based on this assumption, to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method provides lower valid subsampling frequencies for down-conversion compared with the existing methods.
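
    The classical single-band condition that this line of work generalises can be written down directly (a sketch; the band edges are illustrative and the thesis's multiband formulas are more involved): a band occupying [f_L, f_H] folds to baseband without self-aliasing whenever 2*f_H/n <= fs <= 2*f_L/(n-1) for some integer n.

        def valid_subsampling_ranges(f_low, f_high):
            # Classical single-band bandpass-sampling condition: sampling
            # rates with 2*f_high/n <= fs <= 2*f_low/(n-1) fold the band to
            # baseband without self-aliasing (n = 1 is ordinary Nyquist
            # sampling, fs >= 2*f_high, and is omitted here).
            n_max = int(f_high // (f_high - f_low))
            return [(2.0 * f_high / n, 2.0 * f_low / (n - 1))
                    for n in range(2, n_max + 1)]

        # Illustrative 10 MHz-wide band at 430-440 MHz:
        for lo, hi in valid_subsampling_ranges(430e6, 440e6):
            print(f"{lo / 1e6:7.2f} .. {hi / 1e6:7.2f} MHz")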

  6. Some aspects of simultaneously flying Topex Follow-On in a Topex orbit with Geosat Follow-On in a Geosat orbit

    NASA Technical Reports Server (NTRS)

    Parke, Michael E.; Born, George; Mclaughlin, Craig

    1994-01-01

    The advantages of having Geosat Follow-On in a Geosat orbit flying simultaneously with Topex Follow-On in a Topex/Poseidon orbit are examined. The orbits are evaluated using two criteria. The first is the acute crossover angle. This angle should be at least 40 degrees in order to accurately resolve the slope of sea level at crossover locations. The second is tidal aliasing. In order to solve for tides, the largest constituents should not be aliased to a frequency lower than two cycles/year, and their aliases should be separated by at least one cycle from one another and from exactly two cycles/year over the mission life. The results show that TFO and GFO in these orbits complement each other. Both satellites have large crossover angles over a wide latitude range. In addition, the Topex orbit has good aliasing characteristics for the M2 and P1 tides, for which the Geosat orbit has difficulty.

  7. Harmonic analysis of electrified railway based on improved HHT

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

    In this paper, the causes and harms of harmonics in the electric locomotive electrical system are first studied and analyzed. Based on the characteristics of the harmonics in the electrical system, the Hilbert-Huang transform (HHT) method is introduced. Building on an in-depth analysis of the empirical mode decomposition (EMD) method and the Hilbert transform, the causes of, and solutions to, the endpoint effect and the modal aliasing problem in the HHT method are explored. For the endpoint effect, this paper uses a point-symmetric extension method to extend the collected data; for the modal aliasing problem, it uses a high-frequency harmonic assistant method to preprocess the signal and gives an empirical formula for the high-frequency auxiliary harmonic. Finally, combining the suppression of the HHT endpoint effect and of modal aliasing, an improved HHT method is proposed and simulated in MATLAB. The simulation results show that the improved HHT is effective for the electric locomotive power supply system.

  8. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  9. Demonstrating the Value of Fine-resolution Optical Data for Minimising Aliasing Impacts on Biogeochemical Models of Surface Waters

    NASA Astrophysics Data System (ADS)

    Chappell, N. A.; Jones, T.; Young, P.; Krishnaswamy, J.

    2015-12-01

    There is increasing awareness that under-sampling may have resulted in the omission of important physicochemical information present in water quality signatures of surface waters - thereby affecting interpretation of biogeochemical processes. For dissolved organic carbon (DOC) and nitrogen this under-sampling can now be avoided using UV-visible spectroscopy measured in-situ and continuously at a fine-resolution e.g. 15 minutes ("real time"). Few methods are available to extract biogeochemical process information directly from such high-frequency data. Jones, Chappell & Tych (2014 Environ Sci Technol: 13289-97) developed one such method using optically-derived DOC data based upon a sophisticated time-series modelling tool. Within this presentation we extend the methodology to quantify the minimum sampling interval required to avoid distortion of model structures and parameters that describe fundamental biogeochemical processes. This shifting of parameters which results from under-sampling is called "aliasing". We demonstrate that storm dynamics at a variety of sites dominate over diurnal and seasonal changes and that these must be characterised by sampling that may be sub-hourly to avoid aliasing. This is considerably shorter than that used by other water quality studies examining aliasing (e.g. Kirchner 2005 Phys Rev: 069902). The modelling approach presented is being developed into a generic tool to calculate the minimum sampling for water quality monitoring in systems driven primarily by hydrology. This is illustrated with fine-resolution, optical data from watersheds in temperate Europe through to the humid tropics.

  10. Magnetic Moment Quantifications of Small Spherical Objects in MRI

    PubMed Central

    Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2014-01-01

    Purpose The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Methods Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Results Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. Conclusion An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517

  11. Magnetic moment quantifications of small spherical objects in MRI.

    PubMed

    Cheng, Yu-Chung N; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2015-07-01

    The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values, and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the promising potential of this approach in biological imaging.

  13. Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.

    2016-12-01

    Key equatorial climate phenomena such as the QBO and ENSO have never been adequately explained as deterministic processes, despite recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator, i.e., no Coriolis force and a small-angle approximation. To connect the analytical Sturm-Liouville results to observations, a first-order forcing consistent with a seasonally aliased Draconic or nodal lunar period (27.21 d aliased into 2.36 y) is applied. This has a plausible rationale, as it ties a latitudinal forcing cycle via a cross-product to the longitudinal terms in the Laplace formulation. The fitted results match the features of the QBO both qualitatively and quantitatively; adding second-order terms due to other seasonally aliased lunar periods provides finer detail while remaining consistent with the physical model. Further, symbolic regression machine learning experiments on the data provided validation of the approach, as they discovered the same analytical form and fitted values as the first-principles Laplace model. These results conflict with Lindzen's QBO model, in that his original formulation fell short of making the lunar connection, even though Lindzen himself asserted that "it is unlikely that lunar periods could be produced by anything other than the lunar tidal potential". By applying a similar analytical approach to ENSO, we find that the tidal equations need to be replaced with a Mathieu-equation formulation consistent with describing a sloshing process in the thermocline depth. Adapting the hydrodynamic mathematics of sloshing, we find that a biennial modulation coupled with angular momentum forcing variations matching the Chandler wobble gives an impressive match over the measured ENSO range from 1880 to the present. Lunar tidal periods and an additional triaxial nutation with a 14-year period provide additional fidelity. The caveat is a phase inversion of the biennial mode lasting from 1980 to 1996. The parsimony of these analytical models arises from applying only known cyclic forcing terms to fundamental wave-equation formulations. This raises the possibility that both QBO and ENSO can be predicted years in advance, apart from a metastable biennial phase inversion in ENSO.
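
    The seasonal-aliasing arithmetic behind the quoted 2.36 y figure is a one-line folding calculation (the period values are standard astronomical constants):

        # Sampling the 27.2122 d Draconic month once per tropical year folds
        # its frequency to the distance from the nearest whole number of
        # cycles per year.
        f = 365.2422 / 27.2122        # ~13.422 Draconic cycles per year
        f_alias = abs(f - round(f))   # ~0.422 cycles/year after annual aliasing
        print(1.0 / f_alias)          # ~2.37 y, the aliased period cited above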

  14. a Climatology of Global Precipitation.

    NASA Astrophysics Data System (ADS)

    Legates, David Russell

    A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower-frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm, with 1251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.

  15. Spectral analysis of highly aliased sea-level signals

    NASA Astrophysics Data System (ADS)

    Ray, Richard D.

    1998-10-01

    Observing high-wavenumber ocean phenomena with a satellite altimeter generally calls for "along-track" analyses of the data: measurements along a repeating satellite ground track are analyzed in a point-by-point fashion, as opposed to spatially averaging data over multiple tracks. The sea-level aliasing problems encountered in such analyses can be especially challenging. For TOPEX/POSEIDON, all signals with frequency greater than 18 cycles per year (cpy), including both tidal and subdiurnal signals, are folded into the 0-18 cpy band. Because the tidal bands are wider than 18 cpy, residual tidal cusp energy, plus any subdiurnal energy, is capable of corrupting any low-frequency signal of interest. The practical consequences of this are explored here by using real sea-level measurements from conventional tide gauges, for which the true oceanographic spectrum is known and to which a simulated "satellite-measured" spectrum, based on coarsely subsampled data, may be compared. At many locations the spectrum is sufficiently red that interannual frequencies remain unaffected. Intra-annual frequencies, however, must be interpreted with greater caution, and even interannual frequencies can be corrupted if the spectrum is flat. The results also suggest that whenever tides must be estimated directly from the altimetry, response methods of analysis are preferable to harmonic methods, even in nonlinear regimes; this will remain so for the foreseeable future. We concentrate on three example tide gauges: two coastal stations on the Malay Peninsula where the closely aliased K1 and Ssa tides are strong and at Canton Island where trapped equatorial waves are aliased.

  16. Blending of phased array data

    NASA Astrophysics Data System (ADS)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme 'beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) into only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with fewer channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.

  17. Identifying technical aliases in SELDI mass spectra of complex mixtures of proteins

    PubMed Central

    2013-01-01

    Background Biomarker discovery datasets created using mass spectrum protein profiling of complex mixtures of proteins contain many peaks that represent the same protein with different charge states. Correlated variables such as these can confound the statistical analyses of proteomic data. Previously we developed an algorithm that clustered mass spectrum peaks that were biologically or technically correlated. Here we demonstrate an algorithm that clusters correlated technical aliases only. Results In this paper, we propose a preprocessing algorithm that can be used for grouping technical aliases in mass spectrometry protein profiling data. The stringency of the variance allowed for clustering is customizable, thereby affecting the number of peaks that are clustered. Subsequent analysis of the clusters, instead of individual peaks, helps reduce difficulties associated with technically-correlated data, and can aid more efficient biomarker identification. Conclusions This software can be used to pre-process and thereby decrease the complexity of protein profiling proteomics data, thus simplifying the subsequent analysis of biomarkers by decreasing the number of tests. The software is also a practical tool for identifying which features to investigate further by purification, identification and confirmation. PMID:24010718

  18. Super-resolution for imagery from integrated microgrid polarimeters.

    PubMed

    Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M

    2011-07-04

    Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.

  19. Interpretation of aeromagnetic data over Abeokuta and its environs, Southwest Nigeria, using spectral analysis (Fourier transform technique)

    NASA Astrophysics Data System (ADS)

    Olurin, Oluwaseun T.; Ganiyu, Saheed A.; Hammed, Olaide S.; Aluko, Taiwo J.

    2016-10-01

    This study presents the results of spectral analysis of magnetic data over the Abeokuta area, Southwestern Nigeria, using the fast Fourier transform (FFT) in Microsoft Excel. The study deals with the quantitative interpretation of airborne magnetic data (Sheet No. 260) acquired by the Nigerian Geological Survey Agency in 2009. In order to minimise aliasing error, the aeromagnetic data were gridded at a spacing of 1 km. The spectral analysis technique was used to estimate magnetic basement depths, and the interpretation shows that the magnetic sources are mainly distributed at two levels. The shallow sources (minimum depth) range in depth from 0.103 to 0.278 km below ground level and are inferred to be due to intrusions within the region. The deeper sources (maximum depth) range in depth from 2.739 to 3.325 km below ground and are attributed to the underlying basement.

  20. A simulation for gravity fine structure recovery from high-low GRAVSAT SST data

    NASA Technical Reports Server (NTRS)

    Estes, R. H.; Lancaster, E. R.

    1976-01-01

    Covariance error analysis techniques were applied to investigate estimation strategies for the high-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. Surface density blocks of 5 deg x 5 deg and 2 1/2 deg x 2 1/2 deg resolution were utilized to represent the high order geopotential with the drag-free GRAVSAT configured in a nearly circular polar orbit at 250 km altitude. GEOPAUSE and geosynchronous satellites were considered as high relay spacecraft. It is demonstrated that knowledge of gravitational fine structure can be significantly improved at 5 deg x 5 deg resolution using SST data from a high-low configuration with reasonably accurate orbits for the low GRAVSAT. The gravity fine structure recoverability of the high-low SST mission is compared with the low-low configuration and shown to be superior.

  1. Scanning wind-vector scatterometers with two pencil beams

    NASA Technical Reports Server (NTRS)

    Kirimoto, T.; Moore, R. K.

    1984-01-01

    A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power, and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation and other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with a 1-watt radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing the aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.

  2. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

    The development, simulation, and testing of an algorithm for anti-aliasing vector drawings is discussed. The pseudo anti-aliasing line drawing algorithm is an extension to Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments where the display intensity shifts from one segment to the other in this overlap (transition region). In this algorithm the length of the overlap and the intensity shift are essentially constants because the transition region is an aid to the eye in integrating the segments into a single smooth line.
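
    The core idea, splitting each column's intensity between two neighbouring pixels so the eye integrates them into a smooth line, can be sketched in a few lines. This is a minimal illustration in the same spirit, not Rajala's exact algorithm:

      import numpy as np

      def aa_line(img, x0, y0, x1, y1):
          """Draw a shallow (|slope| <= 1) anti-aliased line into 2-D array img."""
          slope = (y1 - y0) / (x1 - x0)
          for x in range(x0, x1 + 1):
              y = y0 + slope * (x - x0)      # exact line position in this column
              yi = int(np.floor(y))
              frac = y - yi
              img[yi, x] += 1.0 - frac       # lower pixel: remainder of intensity
              img[yi + 1, x] += frac         # upper pixel: overlapping share

      canvas = np.zeros((16, 32))
      aa_line(canvas, 1, 2, 30, 11)
      print(np.round(canvas[2:13, :8], 2))   # intensity ramps across each step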

  3. Spectral decontamination of a real-time helicopter simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    Nonlinear mathematical models of a rotor system, referred to as rotating blade-element models, produce steady-state, high-frequency harmonics of significant magnitude. In a discrete simulation model, certain of these harmonics may be incompatible with realistic real-time computational constraints because of their aliasing into the operational low-pass region. However, the energy in an aliased harmonic may be suppressed by increasing the computation rate of an isolated, causal nonlinearity and using an appropriate filter. This decontamination technique is applied to Sikorsky's real-time model of the Black Hawk helicopter, as supplied to NASA for handling-qualities investigations.
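
    The decontamination recipe, evaluate the isolated nonlinearity at a higher internal rate, low-pass filter, then decimate back to the frame rate, can be sketched as follows; the rates, the toy nonlinearity, and the filter choice are illustrative assumptions:

      import numpy as np
      from scipy.signal import decimate

      fs = 100.0                                  # simulation frame rate (Hz)
      oversample = 8                              # internal rate multiplier
      t_fast = np.arange(0.0, 1.0, 1.0 / (fs * oversample))

      def nonlinearity(t):                        # toy harmonic-rich nonlinearity
          return np.sign(np.sin(2.0 * np.pi * 37.0 * t))

      y_fast = nonlinearity(t_fast)               # evaluated at the fast rate
      y_clean = decimate(y_fast, oversample)      # low-pass filter + downsample
      y_naive = nonlinearity(np.arange(0.0, 1.0, 1.0 / fs))  # aliased harmonics
      print(y_clean.shape, y_naive.shape)         # both at the 100 Hz frame rate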

  4. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
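
    A minimal sketch of the Lomb-Scargle slope estimate discussed above, applied to synthetic white noise (true β = 0) at irregular sample times; the frequency grid and sample counts are arbitrary illustrative choices:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(2)
      t = np.sort(rng.uniform(0, 1000, 500))      # irregular sample times (days)
      y = rng.normal(size=t.size)                 # white noise: expect beta ~ 0

      freqs = np.linspace(0.001, 0.2, 400)        # cycles/day
      pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)
      slope, _ = np.polyfit(np.log(freqs), np.log(pgram), 1)
      print("estimated beta =", -slope)           # P(f) ~ f^(-beta)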

  5. Dynamic change in mitral regurgitant orifice area: comparison of color Doppler echocardiographic and electromagnetic flowmeter-based methods in a chronic animal model.

    PubMed

    Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J

    1995-08-01

    The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.
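
    For context, the hemispheric flow convergence (PISA) calculation being evaluated reduces to two lines of arithmetic: flow rate Q = 2πr²·v_alias over the hemisphere at the aliasing radius, and orifice area = Q divided by the continuous-wave velocity. A worked example with illustrative numbers, not data from the study:

      import math

      r_alias = 0.006     # radius of the aliasing contour (m), from the color map
      v_alias = 0.60      # aliasing velocity (m/s), set by the color Doppler scale
      v_cw = 5.0          # peak regurgitant velocity (m/s), from CW Doppler

      q = 2.0 * math.pi * r_alias**2 * v_alias    # hemisphere area times velocity
      roa_cm2 = (q / v_cw) * 1.0e4                # orifice area, m^2 -> cm^2
      print(f"Q = {q * 1e6:.0f} mL/s, ROA = {roa_cm2:.2f} cm^2")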

  6. Influence of running stride frequency in heart rate variability analysis during treadmill exercise testing.

    PubMed

    Bailón, Raquel; Garatachea, Nuria; de la Iglesia, Ignacio; Casajús, Jose Antonio; Laguna, Pablo

    2013-07-01

    The analysis and interpretation of heart rate variability (HRV) during exercise is challenging not only because of the nonstationary nature of exercise, the time-varying mean heart rate, and the fact that respiratory frequency exceeds 0.4 Hz, but also because of other factors, such as the component centered at the pedaling frequency observed in maximal cycling tests, which may confuse the interpretation of HRV analysis. The objectives of this study are to test the hypothesis that a component centered at the running stride frequency (SF) appears in the HRV of subjects during maximal treadmill exercise testing, and to study its influence on the interpretation of the low-frequency (LF) and high-frequency (HF) components of HRV during exercise. The HRV of 23 subjects during maximal treadmill exercise testing is analyzed. The instantaneous power of different HRV components is computed from the smoothed pseudo-Wigner-Ville distribution of the modulating signal assumed to carry information from the autonomic nervous system, which is estimated based on the time-varying integral pulse frequency modulation model. Besides the LF and HF components, a component centered at the running SF, as well as its aliases, is revealed. The power associated with the SF component and its aliases represents 22±7% (median±median absolute deviation) of the total HRV power in all the subjects. Normalized LF power decreases as the exercise intensity increases, while normalized HF power increases. The power associated with the SF does not change significantly with exercise intensity. Consideration of the running SF component and its aliases is very important in HRV analysis since stride frequency aliases may overlap with the LF and HF components.
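
    Where the stride component and its aliases land is simple folding arithmetic: HRV is effectively sampled at the mean heart rate, so stride harmonics above half that rate fold back into the LF/HF analysis band. A sketch with illustrative values:

      def aliased(f, fs):
          """Frequency observed when a component at f is sampled at rate fs."""
          f_mod = f % fs
          return min(f_mod, fs - f_mod)

      hr_hz = 160 / 60.0          # mean heart rate during heavy exercise (~160 bpm)
      stride_hz = 2.8             # running stride frequency (Hz)
      for k in (1, 2, 3):         # stride component and its harmonics
          # k=1 folds to ~0.13 Hz (LF band), k=2 to ~0.27 Hz (HF band)
          print(k, round(aliased(k * stride_hz, hr_hz), 3), "Hz")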

  7. Fast algorithm for the rendering of three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1994-02-01

    It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.
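
    As a point of comparison (a generic fast height-field shading sketch, not the paper's algorithm), Lambertian shading of gridded surface data needs only finite-difference normals and a dot product with the light direction:

      import numpy as np

      def shade(height, light=(0.5, 0.5, 0.707)):
          gy, gx = np.gradient(height)                 # finite-difference slopes
          norm = np.sqrt(gx**2 + gy**2 + 1.0)          # |(-gx, -gy, 1)|
          lx, ly, lz = light                           # roughly unit-length light
          return np.clip((-gx * lx - gy * ly + lz) / norm, 0.0, 1.0)

      x = np.linspace(-3, 3, 1000)
      surface = np.exp(-(x[None, :]**2 + x[:, None]**2))   # 1000 x 1000 bump
      img = shade(surface)
      print(img.shape, img.min(), img.max())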

  8. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  9. Acquisition of a full-resolution image and aliasing reduction for a spatially modulated imaging polarimeter with two snapshots

    PubMed Central

    Zhang, Jing; Yuan, Changan; Huang, Guohua; Zhao, Yinjun; Ren, Wenyi; Cao, Qizhi; Li, Jianying; Jin, Mingwu

    2018-01-01

    A snapshot imaging polarimeter using spatial modulation can encode all four Stokes parameters, allowing instantaneous polarization measurement from a single interferogram. However, the reconstructed polarization images can suffer from a severe aliasing signal if the high-frequency component of the intensity image is prominent and leaks into the polarization channels, and the reconstructed intensity image also suffers a reduction of spatial resolution due to low-pass filtering. In this work, a method using two anti-phase snapshots is proposed to address the two problems simultaneously. The full-resolution target image and the pure interference fringes can be obtained from the sum and the difference of the two anti-phase interferograms, respectively. The polarization information reconstructed from the pure interference fringes does not contain the aliasing signal from the high-frequency component of the object intensity image. The principles of the method are derived and its feasibility is tested by both computer simulation and a verification experiment. This work provides a novel method for spatially modulated imaging polarization technology with two snapshots to simultaneously reconstruct a full-resolution object intensity image and high-quality polarization components. PMID:29714224
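
    The sum/difference trick is compact enough to verify on synthetic 1-D signals: if the two snapshots carry fringes of opposite phase on the same intensity, the sum returns the full-resolution intensity and the difference isolates the fringes. A sketch with stand-in signals:

      import numpy as np

      x = np.linspace(0, 1, 512)
      intensity = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * x)      # object intensity
      fringes = 0.3 * np.cos(2 * np.pi * 60 * x)             # carrier-modulated term

      snap1 = 0.5 * intensity + fringes                      # in-phase snapshot
      snap2 = 0.5 * intensity - fringes                      # anti-phase snapshot

      recovered_intensity = snap1 + snap2                    # = intensity, full res
      pure_fringes = 0.5 * (snap1 - snap2)                   # = fringes, no leakage
      print(np.allclose(recovered_intensity, intensity),
            np.allclose(pure_fringes, fringes))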

  10. Sampling and Reconstruction of the Pupil and Electric Field for Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Smith, Jeffrey; Aronstein, David

    2012-01-01

    This technology is based on sampling considerations for a band-limited function, which has application to optical estimation generally, and to phase retrieval specifically. The analysis begins with the observation that the Fourier transform of an optical aperture function (pupil) can be implemented with minimal aliasing for Q values down to Q = 1. The sampling ratio, Q, is defined as the ratio of the sampling frequency to the band-limited cut-off frequency. The analytical results are given using a 1-d aperture function, and with the electric field defined by the band-limited sinc(x) function. Perfect reconstruction of the Fourier transform (electric field) is derived using the Whittaker-Shannon sampling theorem for 1
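
    A minimal numerical check of the underlying Whittaker-Shannon reconstruction, using a band-limited test field built from sinc functions sampled at unit spacing (illustrative, not the authors' code):

      import numpy as np

      def field(t):                       # band-limited test signal
          return np.sinc(t) + 0.5 * np.sinc(t - 3.0)

      n = np.arange(-50, 51)              # sample grid, unit spacing
      samples = field(n)

      def reconstruct(t):                 # Whittaker-Shannon interpolation
          return np.sum(samples * np.sinc(t - n))

      for t in (0.25, 1.5, 3.3):          # off-grid points
          print(t, reconstruct(t), field(t))   # reconstruction matches the field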

  11. Recovery of an evolving magnetic flux rope in the solar wind: Decomposing spatial and temporal variations from single-spacecraft data

    NASA Astrophysics Data System (ADS)

    Hasegawa, H.; Sonnerup, B.; Hu, Q.; Nakamura, T.

    2013-12-01

    We present a novel single-spacecraft data analysis method for decomposing spatial and temporal variations of physical quantities at points along the path of a spacecraft in spacetime. The method is designed for use in the reconstruction of slowly evolving two-dimensional, magneto-hydrostatic structures (Grad-Shafranov equilibria) in a space plasma. It is an extension of the one developed by Sonnerup and Hasegawa [2010] and Hasegawa et al. [2010], in which it was assumed that variations in the time series of data, recorded as the structures move past the spacecraft, are all due to spatial effects. In reality, some of the observed variations are usually caused by temporal evolution of the structure during the time it moves past the observing spacecraft; the information in the data about the spatial structure is aliased by temporal effects. The purpose here is to remove this time aliasing from the reconstructed maps of field and plasma properties. Benchmark tests are performed by use of synthetic data taken by a virtual spacecraft as it traverses, at a constant velocity, a slowly growing magnetic flux rope in a two-dimensional magnetohydrodynamic simulation of magnetic reconnection. These tests show that the new method can better recover the spacetime behavior of the flux rope than does the original version, in which time aliasing effects had not been removed. An application of the new method to a solar wind flux rope, observed by the ACE spacecraft, suggests that it was evolving in a significant way during the ~17 hour interval of the traversal. References: Hasegawa, H., B. U. Ö. Sonnerup, and T. K. M. Nakamura (2010), Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event, J. Geophys. Res., 115, A11219, doi:10.1029/2010JA015679; Sonnerup, B. U. Ö., and H. Hasegawa (2010), On slowly evolving Grad-Shafranov equilibria, J. Geophys. Res., 115, A11218, doi:10.1029/2010JA015678.
    [Figure: Magnetic field maps recovered from (a) the aliased (original) and (b) the de-aliased (new) versions of the time evolution method. Colors show the out-of-plane (z) magnetic field component, white arrows at points along y = 0 show the transverse velocities obtained from the reconstruction, and blue diamonds in panel (b) mark the location of the ACE spacecraft.]

  12. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
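
    The de-aliasing step for the identified frequencies is simple to state: a component observed at f_alias under frame rate fs may originate from any folded candidate k·fs ± f_alias, and a coarse reference measurement (or physics) selects the right one. A sketch enumerating candidates (values illustrative):

      def alias_candidates(f_alias, fs, n_folds=4):
          """All positive frequencies that fold to f_alias when sampled at fs."""
          cands = set()
          for k in range(n_folds + 1):
              for f in (k * fs + f_alias, k * fs - f_alias):
                  if f > 0:
                      cands.add(f)
          return sorted(cands)

      fs = 30.0              # affordable-camera frame rate (Hz)
      f_alias = 12.5         # frequency identified from the aliased video data
      print(alias_candidates(f_alias, fs))   # 12.5, 17.5, 42.5, 47.5, ...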

  13. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm and relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on an FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  14. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
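
    The flavor of an energy-conserving implicit push can be conveyed with the implicit-midpoint rule iterated to convergence, here for a single particle in a fixed linear field as a stand-in for the self-consistent Vlasov-Ampère solve (a toy sketch, not the authors' scheme):

      def push(x, v, dt, efield, tol=1e-12, max_iter=60):
          """One implicit-midpoint step, Picard-iterated to nonlinear convergence."""
          x_new, v_new = x, v
          for _ in range(max_iter):
              x_mid = 0.5 * (x + x_new)
              v_mid = 0.5 * (v + v_new)
              x_next = x + dt * v_mid
              v_next = v + dt * efield(x_mid)        # unit charge-to-mass ratio
              if abs(x_next - x_new) + abs(v_next - v_new) < tol:
                  break
              x_new, v_new = x_next, v_next
          return x_next, v_next

      x, v, dt = 1.0, 0.0, 0.5                       # a fairly large time step
      for _ in range(1000):
          x, v = push(x, v, dt, lambda s: -s)        # E(x) = -x: harmonic motion
      print("energy drift:", 0.5 * (x * x + v * v) - 0.5)   # stays near round-off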

  15. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  16. Spurious One-Month and One-Year Periods in Visual Observations of Variable Stars

    NASA Astrophysics Data System (ADS)

    Percy, J. R.

    2015-12-01

    Visual observations of variable stars, when time-series analyzed with some algorithms such as DC-DFT in vstar, show spurious periods at or close to one synodic month (29.5306 days), and also at about a year, with an amplitude of typically a few hundredths of a magnitude. The one-year periods have been attributed to the Ceraski effect, which was believed to be a physiological effect of the visual observing process. This paper reports on time-series analysis, using DC-DFT in vstar, of visual observations (and in some cases, V observations) of a large number of stars in the AAVSO International Database, initially to investigate the one-month periods. The results suggest that both the one-month and one-year periods are actually due to aliasing of the stars' very low-frequency variations, though they do not rule out very low-amplitude signals (typically 0.01 to 0.02 magnitude) which may be due to a different process, such as a physiological one. Most or all of these aliasing effects may be avoided by using a different algorithm, which takes explicit account of the window function of the data, and/or by being fully aware of the possible presence of and aliasing by very low-frequency variations.

  17. Unilateral contact induced blade/casing vibratory interactions in impellers: Analysis for rigid casings

    NASA Astrophysics Data System (ADS)

    Batailly, Alain; Meingast, Markus; Legrand, Mathias

    2015-02-01

    This contribution addresses the vibratory analysis of unilateral-contact induced structural interactions between a bladed impeller and its surrounding rigid casing. Such assemblies can be found in helicopter or small aircraft engines for instance and the interactions of interest shall arise due to the always tighter operating clearances between the rotating and stationary components. The investigation is conducted by extending to cyclically symmetric structures an in-house time-marching based tool dedicated to unilateral contact occurrences in turbomachines. The main components of the considered impeller together with the associated assumptions and modelling principles considered in this work are detailed. Typical dynamical features of cyclically symmetric structures, such as the aliasing effect and frequency clustering are explored in this nonlinear framework by means of thorough frequency-domain analyses and harmonic trackings of the numerically predicted impeller displacements. Additional contact maps highlight the existence of critical rotational velocities at which displacements potentially reach high amplitudes due to the synchronization of the bladed assembly vibratory pattern with the shape of the rigid casing. The proposed numerical investigations are also compared to a simpler and (almost) empirical criterion: it is suggested, based on nonlinear numerical simulations with a linear reduced order model of the impeller and a rigid casing, that this criterion may miss important critical velocities emanating from the unfavorable combination of aliasing and contact-induced higher harmonics in the vibratory response of the impeller. Overall, this work suggests a way to enhance guidelines to improve the design of impellers in the context of nonlinear and nonsmooth dynamics.

  18. Transesophageal color Doppler evaluation of obstructive lesions using the new "Quasar" technology.

    PubMed

    Fan, P; Nanda, N C; Gatewood, R P; Cape, E G; Yoganathan, A P

    1995-01-01

    Due to the unavoidable problem of aliasing, color flow signals from high blood flow velocities cannot be measured directly by conventional color Doppler. A new technology termed Quantitative Un-Aliased Speed Algorithm Recognition (Quasar) has been developed to overcome this limitation. Employing this technology, we used transesophageal color Doppler echocardiography to investigate whether the velocities detected by the Quasar would correlate with those obtained by continuous-wave Doppler both in vitro and in vivo. In the in vitro study, a 5.0 MHz transesophageal transducer of a Kontron Sigma 44 color Doppler flow system was used. Fourteen different peak velocities calculated and recorded by color Doppler-guided continuous-wave Doppler were randomly selected. In the clinical study, intraoperative transesophageal echocardiography was performed using the same transducer in 18 adults (13 aortic valve stenosis, 2 aortic and 2 mitral stenosis, 2 hypertrophic obstructive cardiomyopathy and 1 mitral valve stenosis). Following each continuous-wave Doppler measurement, the Quasar was activated, and a small Quasar marker was placed in the brightest area of the color flow jet to obtain the maximum mean velocity readout. The maximum mean velocities measured by Quasar closely correlated with maximum peak velocities obtained by color flow guided continuous-wave Doppler in both in vitro (0.53 to 1.65 m/s, r = 0.99) and in vivo studies (1.50 to 6.01 m/s, r = 0.97). We conclude that the new Quasar technology can accurately measure high blood flow velocities during transesophageal color Doppler echocardiography. This technique has the potential of obviating the need for continuous-wave Doppler.
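
    For context, the aliasing limit that Quasar works around is the Nyquist velocity of pulsed Doppler, v = c·PRF/(4f₀). A worked example with typical (illustrative) parameter values, not instrument specifications from the study:

      c = 1540.0      # speed of sound in soft tissue (m/s)
      f0 = 5.0e6      # transducer carrier frequency (Hz); a 5 MHz TEE probe
      prf = 6000.0    # pulse repetition frequency (Hz)

      v_nyquist = c * prf / (4.0 * f0)
      print(f"Nyquist (aliasing) velocity: {v_nyquist:.2f} m/s")   # ~0.46 m/s,
      # far below the 5-6 m/s jets of stenotic lesions, hence the aliasing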

  19. On a Recent Preliminary Study for the Measurement of the Lense-Thirring Effect with the Galileo Satellites

    NASA Astrophysics Data System (ADS)

    Iorio, L.

    2014-01-01

    It has recently been proposed to combine the node drifts of the future constellation of 27 Galileo spacecraft together with those of the existing Laser Geodynamics Satellites (LAGEOS)-type satellites to improve the accuracy of the past and ongoing tests of the Lense-Thirring (LT) effect by removing the bias of a larger number of even zonal harmonics Jℓ than has been done or planned so far. In practice, this seems a difficult goal to achieve realistically, for a number of reasons. First, the LT range signature of a Galileo-type satellite is as small as 0.5 mm over three-day arcs, corresponding to a node rate of just Ω̇_LT = 2 milliarcseconds per year (mas yr⁻¹). Some tesseral and sectorial ocean tides such as K1 and K2 induce long-period harmonic node perturbations with frequencies that are integer multiples of the extremely slow Galileo node rate Ω̇, which completes a full cycle in about 40 yr. Thus, over time spans, T, of some years, they would act as superimposed semisecular aliasing trends. Since the coefficients of the Jℓ-free multisatellite linear combinations are determined only by the semimajor axis a, the eccentricity e and the inclination I, which are nominally equal for all the Galileo satellites, it is not possible to include all of them. Even using only one Galileo spacecraft together with the LAGEOS family would be unfeasible because the resulting Galileo coefficient would be ≳ 1, thus enhancing the aliasing impact of the uncancelled nonconservative and tidal perturbations.

  20. Interleaved EPI based fMRI improved by multiplexed sensitivity encoding (MUSE) and simultaneous multi-band imaging.

    PubMed

    Chang, Hing-Chiu; Gaur, Pooja; Chou, Ying-hui; Chu, Mei-Lan; Chen, Nan-kuei

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive and powerful imaging tool for detecting brain activities. The majority of fMRI studies are performed with single-shot echo-planar imaging (EPI) due to its high temporal resolution. Recent studies have demonstrated that, by increasing the spatial-resolution of fMRI, previously unidentified neuronal networks can be measured. However, it is challenging to improve the spatial resolution of conventional single-shot EPI based fMRI. Although multi-shot interleaved EPI is superior to single-shot EPI in terms of the improved spatial-resolution, reduced geometric distortions, and sharper point spread function (PSF), interleaved EPI based fMRI has two main limitations: 1) the imaging throughput is lower in interleaved EPI; 2) the magnitude and phase signal variations among EPI segments (due to physiological noise, subject motion, and B0 drift) are translated to significant in-plane aliasing artifact across the field of view (FOV). Here we report a method that integrates multiple approaches to address the technical limitations of interleaved EPI-based fMRI. Firstly, the multiplexed sensitivity-encoding (MUSE) post-processing algorithm is used to suppress in-plane aliasing artifacts resulting from time-domain signal instabilities during dynamic scans. Secondly, a simultaneous multi-band interleaved EPI pulse sequence, with a controlled aliasing scheme incorporated, is implemented to increase the imaging throughput. Thirdly, the MUSE algorithm is then generalized to accommodate fMRI data obtained with our multi-band interleaved EPI pulse sequence, suppressing both in-plane and through-plane aliasing artifacts. The blood-oxygenation-level-dependent (BOLD) signal detectability and the scan throughput can be significantly improved for interleaved EPI-based fMRI. Our human fMRI data obtained from 3 Tesla systems demonstrate the effectiveness of the developed methods. It is expected that future fMRI studies requiring high spatial-resolvability and fidelity will largely benefit from the reported techniques.

  1. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the proposed method achieved a net reduction factor of 4, significantly higher than the 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved to achieve both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times, as well as the image quality, of the proposed method were improved compared to the standard k-t SENSE method.

  2. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  3. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE PAGES

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; ...

    2016-12-05

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  4. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracies of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new generation of sensors that future satellite missions will carry. Therefore, we have created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing, even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in their consistent stochastic modeling within the adjustment process.

  5. Lattice functions, wavelet aliasing, and SO(3) mappings of orthonormal filters

    NASA Astrophysics Data System (ADS)

    John, Sarah

    1998-01-01

    A formulation of multiresolution in terms of a family of dyadic lattices {S_j; j ∈ Z} and filter matrices M_j ⊂ U(2) ⊂ GL(2, C) illuminates the role of aliasing in wavelets and provides exact relations between scaling and wavelet filters. By showing the {D_N; N ∈ Z⁺} collection of compactly supported, orthonormal wavelet filters to be strictly SU(2) ⊂ U(2), its representation in the Euler angles of the rotation group SO(3) establishes several new results: a 1:1 mapping of the {D_N} filters onto a set of orbits on the SO(3) manifold; an equivalence of D_∞ to the Shannon filter; and a simple new proof for a criterion ruling out pathologically scaled nonorthonormal filters.

  6. Blind Compressed Sensing Enables 3-Dimensional Dynamic Free Breathing Magnetic Resonance Imaging of Lung Volumes and Diaphragm Motion.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews

    2016-06-01

    The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme to recover dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually better reconstructions than the other schemes do. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring as compared with the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole-lung coverage (16 slices), were achieved using the BCS scheme.

  7. Nonlinear Dot Plots.

    PubMed

    Rodrigues, Nils; Weiskopf, Daniel

    2018-01-01

    Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.

  8. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10 day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K Band Range Rate (KBRR) residuals (1 μm s⁻¹ level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate index and in-situ observations). Regional TWS show higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions as a linear combination of GFZ, CSR and JPL GRACE solutions).

  9. Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO).

    PubMed

    Velikina, Julia V; Samsonov, Alexey A

    2015-11-01

    To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
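
    The regularization idea can be illustrated with a hedged sketch: estimate a temporal basis from training data and penalize, with an l1-type norm, the part of each voxel time course that falls outside the spanned subspace, so off-model signal is discouraged but not forbidden. The SVD-based basis estimate and the choice of penalty are illustrative assumptions, not the published implementation.

```python
import numpy as np

def temporal_basis(training_timecourses, rank):
    """SVD of training data (time x samples) -> first `rank` temporal modes."""
    u, _, _ = np.linalg.svd(training_timecourses, full_matrices=False)
    return u[:, :rank]                            # shape (time, rank)

def model_consistency_penalty(x, phi, lam=0.01):
    """l1 penalty on the component of each time course outside span(phi).
    x: (time, voxels) dynamic series; phi: (time, rank) temporal basis."""
    residual = x - phi @ (phi.conj().T @ x)       # (I - Phi Phi^H) x
    return lam * np.sum(np.abs(residual))
```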

  10. RECONSTRUCTION OF DYNAMIC IMAGE SERIES FROM UNDERSAMPLED MRI DATA USING DATA-DRIVEN MODEL CONSISTENCY CONDITION (MOCCO)

    PubMed Central

    Velikina, Julia V.; Samsonov, Alexey A.

    2014-01-01

    Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724

  11. Color, contrast sensitivity, and the cone mosaic.

    PubMed Central

    Williams, D; Sekiguchi, N; Brainard, D

    1993-01-01

    This paper evaluates the role of various stages in the human visual system in the detection of spatial patterns. Contrast sensitivity measurements were made for interference fringe stimuli in three directions in color space with a psychophysical technique that avoided blurring by the eye's optics including chromatic aberration. These measurements were compared with the performance of an ideal observer that incorporated optical factors, such as photon catch in the cone mosaic, that influence the detection of interference fringes. The comparison of human and ideal observer performance showed that neural factors influence the shape as well as the height of the foveal contrast sensitivity function for all color directions, including those that involve luminance modulation. Furthermore, when optical factors are taken into account, the neural visual system has the same contrast sensitivity for isoluminant stimuli seen by the middle-wavelength-sensitive (M) and long-wavelength-sensitive (L) cones and isoluminant stimuli seen by the short-wavelength-sensitive (S) cones. Though the cone submosaics that feed these chromatic mechanisms have very different spatial properties, the later neural stages apparently have similar spatial properties. Finally, we review the evidence that cone sampling can produce aliasing distortion for gratings with spatial frequencies exceeding the resolution limit. Aliasing can be observed with gratings modulated in any of the three directions in color space we used. We discuss mechanisms that prevent aliasing in most ordinary viewing conditions. PMID:8234313
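
    The folding of frequencies beyond the resolution limit can be shown with a back-of-envelope sketch. The ~0.5 arcmin foveal cone spacing is a commonly quoted round number used here only as an assumed input; a real mosaic is two-dimensional and irregular.

```python
cone_spacing_deg = 0.5 / 60           # assumed foveal cone spacing, ~0.5 arcmin
f_sample = 1.0 / cone_spacing_deg     # sampling rate, cycles/deg (~120)
f_nyquist = f_sample / 2              # resolution limit (~60 cycles/deg)

def aliased_frequency(f, fs=f_sample):
    """Frequency to which a grating of frequency f folds when sampled at fs."""
    return abs(f - fs * round(f / fs))

print(f_nyquist)               # ~60 cycles/deg
print(aliased_frequency(80.0)) # an 80 c/deg grating masquerades as ~40 c/deg
```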

  12. Off-resonance suppression for multispectral MR imaging near metallic implants.

    PubMed

    den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens

    2015-01-01

    Metal artifact reduction in MRI within clinically feasible scan-times and without through-plane aliasing. Existing metal artifact reduction techniques include view angle tilting (VAT), which resolves in-plane distortions, and multispectral imaging (MSI) techniques, such as slice encoding for metal artifact correction (SEMAC) and multiacquisition variable-resonance image combination (MAVRIC), that further reduce image distortions but significantly increase scan-time. Scan-time depends on anatomy size and the anticipated total spectral content of the signal. Signals outside the anticipated spatial region may cause through-plane back-folding. Off-resonance suppression (ORS), using different gradient amplitudes for excitation and refocusing, is proposed to provide well-defined spatial-spectral selectivity in MSI, allowing scan-time reduction and flexibility of scan orientation. Comparisons of MSI techniques with and without ORS were made in phantom and volunteer experiments. Off-resonance suppressed SEMAC (ORS-SEMAC) and outer-region suppressed MAVRIC (ORS-MAVRIC) required fewer through-plane phase-encoding steps than the original MSI methods. Whereas SEMAC (scan time: 5'46") and MAVRIC (4'12") suffered from through-plane aliasing, ORS-SEMAC and ORS-MAVRIC allowed alias-free imaging in the same scan-times. ORS can be used in MSI to limit the selected spatial-spectral region and contribute to metal artifact reduction in clinically feasible scan-times while avoiding slice aliasing. © 2014 Wiley Periodicals, Inc.

  13. The N/Rev phenomenon in simulating a blade-element rotor system

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    When a simulation model produces frequencies that are beyond the bandwidth of a discrete implementation, anomalous frequencies appear within the bandwidth. Such is the case with blade-element models of rotor systems, which are used in the real-time, man-in-the-loop simulation environment. Steady-state, high-frequency harmonics generated by these models, whether aliased or not, obscure piloted helicopter simulation responses. Since these harmonics are attenuated in actual rotorcraft (e.g., because of structural damping), a faithful environment representation for handling qualities purposes may be created from the original model by using certain filtering techniques, as outlined here. These include harmonic consideration, conventional filtering, and decontamination. The process of decontamination is of special interest because frequencies of importance to simulation operation are not attenuated, whereas superimposed aliased harmonics are.
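
    How a steady rotor harmonic beyond the simulation bandwidth folds back in band can be illustrated numerically. The rotor speed, blade count, and frame rate below are made-up but plausible values, not figures from the paper.

```python
rotor_rev_per_s = 4.3    # assumed main-rotor speed (~258 rpm)
n_blades = 4
frame_rate = 60.0        # assumed discrete simulation frame rate, Hz

f_nrev = n_blades * rotor_rev_per_s    # N/rev harmonic: 17.2 Hz (in band)
f_2nrev = 2 * f_nrev                   # 2N/rev: 34.4 Hz (beyond 30 Hz Nyquist)

alias = abs(f_2nrev - frame_rate * round(f_2nrev / frame_rate))
print(alias)  # 2N/rev appears as a spurious 25.6 Hz component in the responses
```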

  14. Signal conditioning units for vibration measurement in HUMS

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Liu, Tingting; Yu, Zirong; Chen, Lijuan; Huang, Xinjie

    2018-03-01

    A signal conditioning unit for vibration measurement in HUMS is proposed in this paper. Because the frequencies of the vibrations produced by different helicopter components differ, a two-stage amplifier and a programmable anti-aliasing filter are designed so that the unit can serve measurements on different types of helicopter. Vibration signals are first converted into measurable electrical signals by an ICP driver. A pre-amplifier and a programmable gain amplifier then magnify the weak electrical signals, and the programmable anti-aliasing filter suppresses noise interference. The unit was tested using a function generator and an oscilloscope. The experimental results demonstrate the effectiveness of the proposed design both quantitatively and qualitatively, and show that it can meet the measurement requirements of different helicopter types.
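
    In software terms, the programmable anti-aliasing stage amounts to a low-pass filter whose cutoff is selected per helicopter type to sit below half of the downstream analysis rate. The sketch below is a hedged analogue with assumed rates, order, and cutoff, not the hardware design from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10_000.0        # assumed acquisition sampling rate, Hz
cutoff = 2_000.0     # assumed programmable cutoff, Hz

sos = butter(4, cutoff, btype="low", fs=fs, output="sos")

t = np.arange(0.0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 4_000 * t)
y = sosfiltfilt(sos, x)  # the 4 kHz component is suppressed, so the stream can
                         # later be decimated (e.g. to 5 kHz) without aliasing
```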

  15. Luma-chroma space filter design for subpixel-based monochrome image downsampling.

    PubMed

    Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng

    2013-10-01

    In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.

  16. Striping artifact reduction in lunar orbiter mosaic images

    USGS Publications Warehouse

    Mlsna, P.A.; Becker, T.

    2006-01-01

    Photographic images of the moon from the 1960s Lunar Orbiter missions are being processed into maps for visual use. The analog nature of the images has produced numerous artifacts, the chief of which causes a vertical striping pattern in mosaic images formed from a series of filmstrips. Previous methods of stripe removal tended to introduce ringing and aliasing problems in the image data. This paper describes a recently developed alternative approach that succeeds at greatly reducing the striping artifacts while avoiding the creation of ringing and aliasing artifacts. The algorithm uses a one-dimensional frequency domain step to deal with the periodic component of the striping artifact and a spatial domain step to handle the aperiodic residue. Several variations of the algorithm have been explored. Results, strengths, and remaining challenges are presented. © 2006 IEEE.

  17. Preliminary Examination of Pulse Shapes From GLAS Ocean Returns

    NASA Astrophysics Data System (ADS)

    Swift, T. P.; Minster, B.

    2003-12-01

    We have examined GLAS data collected over the Pacific Ocean during the commissioning phase of the ICESat mission, in an area where sea state is well documented. The data used for this preliminary analysis were acquired during two passes along track 95, on March 18 and 26 of 2003, along the stretch offshore southern California. These dates were chosen for their lack of cloud cover; large (4.0 m) and small (0.7 m) significant wave heights, respectively; and the presence of waves emanating from single distant Pacific storms. Cloud cover may be investigated using MODIS images (http://acdisx.gsfc.nasa.gov/data/dataset/MODIS/), while models of significant wave heights and wave vectors for offshore California are archived by the Coastal Data Information Program (http://cdip.ucsd.edu/cdip_htmls/models.shtml). We find that the shape of deep-ocean GLAS pulse returns is diagnostic of the state of the ocean surface. A calm surface produces near-Gaussian, single-peaked shot returns. In contrast, a rough surface produces blurred shot returns which often feature multiple peaks; these peaks are typically separated by total path lengths on the order of one meter. Gaussian curves fit to rough-water returns are therefore less reliable and lead to greater measurement error; outliers in the ocean surface elevation product are mostly the result of poorly fit low-energy shot returns. Additionally, beat patterns and aliasing artifacts may arise from the sampling of deep-ocean wave trains by GLAS footprints separated by 140 m. The apparent wavelength of such patterns depends not only on the wave frequency, but also on the angle between the ICESat ground track and the azimuth of the wave crests. We present a preliminary analysis of such patterns which appears to be consistent with a simple geometrical model.

  18. Fast repurposing of high-resolution stereo video content for mobile use

    NASA Astrophysics Data System (ADS)

    Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas

    2012-06-01

    3D video content is captured and created mainly in high resolution, targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real-time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning a dense disparity map, it returns an estimate of the disparity statistics (min, max, mean and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which would yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based, perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.

  19. High-Resolution Gravity and Time-Varying Gravity Field Recovery using GRACE and CHAMP

    NASA Technical Reports Server (NTRS)

    Shum, C. K.

    2002-01-01

    This progress report summarizes the research work conducted under NASA's Solid Earth and Natural Hazards Program 1998 (SENH98) entitled High Resolution Gravity and Time Varying Gravity Field Recovery Using GRACE (Gravity Recovery and Climate Experiment) and CHAMP (Challenging Mini-satellite Package for Geophysical Research and Applications), which included a no-cost extension time period. The investigation conducted pilot studies using simulated GRACE and CHAMP data and other in situ and space geodetic observables, satellite altimeter data, and ocean mass variation data to study the dynamic processes of the Earth which affect climate change. Results from this investigation include: (1) a new method to use the energy approach for expressing gravity mission data as in situ measurements, with the possibility of enhancing the spatial resolution of the gravity signal; (2) a test of this method using CHAMP data, validated by the development of a mean gravity field model from CHAMP; (3) elaborate simulations to quantify tidal and atmospheric errors and to recover hydrological and oceanic signals using GRACE, which show that significant aliasing effects and amplified errors are present in the GRACE resonant geopotential coefficients and that removing these errors is not trivial; and (4) quantification of oceanic and ice sheet mass changes in a geophysical constraint study to assess their contributions to global sea level change; while the results improved significantly over previous studies that used only the SLR (Satellite Laser Ranging)-determined zonal gravity change data, the constraint could be further improved with additional information on mantle rheology, PGR (Post-Glacial Rebound) and ice loading history. A list of relevant presentations and publications is attached, along with a summary of the SENH investigation generated in 2000.

  20. Receptoral and Neural Aliasing.

    DTIC Science & Technology

    1993-01-30

    standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical response to spatio-temporal and novel stimuli for investigation of visual field deficits

  1. A comparison of earthquake backprojection imaging methods for dense local arrays

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
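
    A minimal sketch of the kurtosis pre-processing compared above: a sliding-window kurtosis turns impulsive arrivals into sharp positive peaks while being insensitive to waveform polarity. The window length is an assumed value, and production codes typically use a recursive formulation rather than this plain loop.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, win=100):
    """Characteristic function: kurtosis of each trailing window of samples
    (Fisher convention, so ~0 for Gaussian noise, large at impulsive onsets)."""
    cf = np.zeros(len(trace))
    for i in range(win, len(trace)):
        cf[i] = kurtosis(trace[i - win:i])
    return cf
```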

  2. Continuously differentiable PIC shape functions for triangular meshes

    DOE PAGES

    Barnes, D. C.

    2018-03-21

    In this study, a new class of continuously-differentiable shape functions is developed and applied to two-dimensional electrostatic PIC simulation on an unstructured simplex (triangle) mesh. It is shown that troublesome aliasing instabilities are avoided for cold plasma simulation in which the Debye length is as small as 0.01 cell sizes. These new shape functions satisfy all requirements for PIC particle shape. They are non-negative, have compact support, and partition unity. They are given explicitly by cubic expressions in the usual triangle logical (areal) coordinates. The shape functions are not finite elements because their structure depends on the topology of the mesh, in particular, the number of triangles neighboring each mesh vertex. Nevertheless, they may be useful as approximations to solutions of other problems in which continuity of derivatives is required or desired.

  3. Continuously differentiable PIC shape functions for triangular meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, D. C.

    In this study, a new class of continuously-differentiable shape functions is developed and applied to two-dimensional electrostatic PIC simulation on an unstructured simplex (triangle) mesh. It is shown that troublesome aliasing instabilities are avoided for cold plasma simulation in which the Debye length is as small as 0.01 cell sizes. These new shape functions satisfy all requirements for PIC particle shape. They are non-negative, have compact support, and partition unity. They are given explicitly by cubic expressions in the usual triangle logical (areal) coordinates. The shape functions are not finite elements because their structure depends on the topology of the mesh, in particular, the number of triangles neighboring each mesh vertex. Nevertheless, they may be useful as approximations to solutions of other problems in which continuity of derivatives is required or desired.

  4. Point target detection utilizing super-resolution strategy for infrared scanning oversampling system

    NASA Astrophysics Data System (ADS)

    Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei

    2017-11-01

    To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information and thereby aiding target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are shuffled into a whole oversampled image for post-processing, but the aliasing between neighboring pixels leads to image degradation with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to achieve the preservation and aggregation of target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.

  5. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.

  6. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  7. Mapping nonlinear shallow-water tides: a look at the past and future

    NASA Astrophysics Data System (ADS)

    Andersen, Ole B.; Egbert, Gary D.; Erofeeva, Svetlana Y.; Ray, Richard D.

    2006-12-01

    Overtides and compound tides are generated by nonlinear mechanisms operative primarily in shallow waters. Their presence complicates tidal analysis owing to the multitude of new constituents and their possible frequency overlap with astronomical tides. The science of nonlinear tides was greatly advanced by the pioneering researches of Christian Le Provost who employed analytical theory, physical modeling, and numerical modeling in many extensive studies, especially of the tides of the English Channel. Le Provost’s complementary work with satellite altimetry motivates our attempts to merge these two interests. After a brief review, we describe initial steps toward the assimilation of altimetry into models of nonlinear tides via generalized inverse methods. A series of barotropic inverse solutions is computed for the M_4 tide over the northwest European Shelf. Future applications of altimetry to regions with fewer in situ measurements will require improved understanding of error covariance models because these control the tradeoffs between fitting hydrodynamics and data, a delicate issue in coastal regions. While M_4 can now be robustly determined along the Topex/Poseidon satellite ground tracks, many other compound tides face serious aliasing problems.

  8. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffers from a limited corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is, however, a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems, such as model errors, aliasing effect reduction, and the experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up, are then discussed.
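
    As a point of reference for the controller discussed above, a generic Kalman filter iteration looks as follows. This is a textbook predict/update sketch, not the AO-specific formulation: the turbulence dynamics and wavefront-sensor matrices that would define A, C, Q, and R are not reproduced here.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle: x, P are the state estimate and covariance,
    y is the new measurement, A/C the state and observation matrices,
    Q/R the process and measurement noise covariances."""
    x_pred = A @ x                            # predict state
    P_pred = A @ P @ A.T + Q                  # predict covariance
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)     # update with innovation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```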

  9. Design Considerations for a Dedicated Gravity Recovery Satellite Mission Consisting of Two Pairs of Satellites

    NASA Technical Reports Server (NTRS)

    Wiese, D. N.; Nerem, R. S.; Lemoine, F. G.

    2011-01-01

    Future satellite missions dedicated to measuring time-variable gravity will need to address the concern of temporal aliasing errors; i.e., errors due to high-frequency mass variations. These errors have been shown to be a limiting error source for future missions with improved sensors. One method of reducing them is to fly multiple satellite pairs, thus increasing the sampling frequency of the mission. While one could imagine a system architecture consisting of dozens of satellite pairs, this paper explores the more economically feasible option of optimizing the orbits of two pairs of satellites. While the search space for this problem is infinite by nature, steps have been made to reduce it via proper assumptions regarding some parameters and a large number of numerical simulations exploring appropriate ranges for other parameters. A search space originally consisting of 15 variables is reduced to two variables with the utmost impact on mission performance: the repeat period of both pairs of satellites (shown to be near-optimal when they are equal to each other), as well as the inclination of one of the satellite pairs (the other pair is assumed to be in a polar orbit). To arrive at this conclusion, we assume circular orbits, repeat groundtracks for both pairs of satellites, a 100-km inter-satellite separation distance, and a minimum allowable operational satellite altitude of 290 km based on a projected 10-year mission lifetime. Given the scientific objectives of determining time-variable hydrology, ice mass variations, and ocean bottom pressure signals with higher spatial resolution, we find that an optimal architecture consists of a polar pair of satellites coupled with a pair inclined at 72°, both in 13-day repeating orbits. This architecture provides a 67% reduction in error over one pair of satellites, in addition to reducing the longitudinal striping to such a level that minimal post-processing is required, permitting a substantial increase in the spatial resolution of the gravity field products. It should be emphasized that given different sets of scientific objectives for the mission, or a different minimum allowable satellite altitude, different architectures might be selected.

  10. Application of digital image processing techniques to astronomical imagery 1980

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1981-01-01

    Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.

  11. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intense attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  12. Cosine beamforming

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Wapenaar, Kees

    2014-05-01

    In various application areas, e.g., seismology, astronomy and geodesy, arrays of sensors are used to characterize incoming wavefields due to distant sources. Beamforming is a general term for phase-adjusted summations over the different array elements, for untangling the directionality and elevation angle of the incoming waves. For characterizing noise sources, beamforming is conventionally applied with a temporal Fourier and a 2D spatial Fourier transform, possibly with additional weights. These transforms become aliased for higher frequencies and sparser array-element distributions. As a partial remedy, we derive a kernel for beamforming crosscorrelated data and call it cosine beamforming (CBF). By applying beamforming not directly to the data, but to crosscorrelated data, the sampling is effectively increased. We show that CBF, due to this better sampling, suffers less from aliasing and yields higher resolution than conventional beamforming. As the flip side of the coin, the CBF output shows more smearing for spherical waves than conventional beamforming.
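
    For context, the conventional frequency-domain beamformer that CBF is compared against can be sketched as below; CBF applies an analogous kernel to crosscorrelated station pairs, which is not reproduced here. Names and units are illustrative assumptions.

```python
import numpy as np

def beam_power(spectra, coords, freq, slowness_grid):
    """Conventional beamformer at one frequency.
    spectra: (n_sensors,) complex Fourier coefficients at `freq`;
    coords: (n_sensors, 2) sensor positions, km;
    slowness_grid: (n_beams, 2) trial horizontal slowness vectors, s/km."""
    # exp(-i 2 pi f (s . r)) phase-aligns a plane wave with slowness s
    phase = np.exp(-2j * np.pi * freq * (slowness_grid @ coords.T))
    beams = phase @ spectra / len(spectra)
    return np.abs(beams) ** 2   # power per trial slowness (beam)
```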

  13. The effect of split pixel HDR image sensor technology on MTF measurements

    NASA Astrophysics Data System (ADS)

    Deegan, Brian M.

    2014-03-01

    Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split pixel technology introduces artifacts in MTF measurement. To achieve a HDR image, raw images are captured from both large and small sub-pixels, and combined to make the HDR output. In some cases, a large sub-pixel is used for long exposure captures, and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different, depending on whether you use the large or small sub-pixels i.e. a smaller fill factor (e.g. in the short exposure sub-pixel) will result in higher MTF scores, but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising edge (i.e. black-to-white transition) and falling edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted edge test chart.

  14. Optoelectronic image scanning with high spatial resolution and reconstruction fidelity

    NASA Astrophysics Data System (ADS)

    Craubner, Siegfried I.

    2002-02-01

    In imaging systems, the detector arrays deliver time-discrete output signals, where the spatial frequencies of the object scene are mapped into the electrical signal frequencies. Since the spatial frequency spectrum cannot be bandlimited by the front optics, the usual detector arrays perform a spatial undersampling and, as a consequence, aliasing occurs. A means to partially suppress the backfolded alias band is bandwidth limitation in the reconstruction low-pass, at the price of resolution loss. By utilizing a bilinear detector array in a pushbroom-type scanner, undersampling and aliasing can be overcome. For modeling the perception, the theory of discrete systems and multirate digital filter banks is applied, where aliasing cancellation and perfect reconstruction play an important role. The discrete transfer function of a bilinear array can be embedded into the scheme of a second-order filter bank. The detector arrays already build the analysis bank, and the overall filter bank is completed with the synthesis bank, for which stabilized inverse filters are proposed, to compensate for the low-pass characteristics and to approximate perfect reconstruction. The synthesis filter branch can be realized in a so-called `direct form,' or the `polyphase form,' where the latter is an expenditure-optimal solution, which gives advantages when implemented in a signal processor. This paper attempts to introduce well-established concepts of the theory of multirate filter banks into the analysis of scanning imagers, which are applicable in a much broader sense than for the problems addressed here. To the author's knowledge this is also a novelty.

  15. Airborne radar imaging of subaqueous channel evolution in Wax Lake Delta, Louisiana, USA

    NASA Astrophysics Data System (ADS)

    Shaw, John B.; Ayoub, Francois; Jones, Cathleen E.; Lamb, Michael P.; Holt, Benjamin; Wagner, R. Wayne; Coffey, Thomas S.; Chadwick, J. Austin; Mohrig, David

    2016-05-01

    Shallow coastal regions are among the fastest evolving landscapes but are notoriously difficult to measure with high spatiotemporal resolution. Using Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) data, we demonstrate that high signal-to-noise L band synthetic aperture radar (SAR) can reveal subaqueous channel networks at the distal ends of river deltas. Using 27 UAVSAR images collected between 2009 and 2015 from the Wax Lake Delta in coastal Louisiana, USA, we show that under normal tidal conditions, planform geometry of the distributary channel network is frequently resolved in the UAVSAR images, including ~700 m of seaward network extension over 5 years for one channel. UAVSAR also reveals regions of subaerial and subaqueous vegetation, streaklines of biogenic surfactants, and what appear to be small distributary channels aliased by the survey grid, all illustrating the value of fine resolution, low noise, L band SAR for mapping the nearshore subaqueous delta channel network.

  16. Effective regurgitant orifice area by the color Doppler flow convergence method for evaluating the severity of chronic aortic regurgitation. An animal study.

    PubMed

    Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J

    1996-02-01

    The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities (AV); the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low AV overestimated the reference EM orifice areas; however, at high AV, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high AV and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.

  17. T-phase and tsunami signals recorded by IMS hydrophone triplets during the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Matsumoto, H.; Haralabus, G.; Zampolli, M.; Ozel, N. M.; Yamada, T.; Mark, P. K.

    2016-12-01

    A hydrophone station of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) is used to estimate the back-azimuth of T-phase signals generated by the 2011 Tohoku earthquake. Of the 6 IMS hydrophone stations required by the Treaty, 5 consist of two triplets; the exception is HA1 (Australia), which has only one. The hydrophones of each triplet are suspended in the SOFAR channel and arranged to form an equilateral triangle with each side being approximately two kilometers long. The waveforms from the Tohoku earthquake were received at HA11 on Wake Island, approximately 3100 km south-east of the earthquake epicenter. The frequency range used in the array analysis was chosen to be less than 0.375 Hz, assuming a target phase velocity of 1.5 km/s for T-phases. The T-phase signals that originated from the seismic source, however, show peaks in the frequency band above 1 Hz. As a result of the inter-element distances of 2 km, spatial aliasing is observed in the frequency-wavenumber analysis (F-K analysis) if the entire 100 Hz bandwidth of the hydrophones is used. This spatial aliasing is significant because the distance between hydrophones in the triplet is large in comparison to the ratio between the phase velocity of T-phase signals and the frequency. To circumvent this spatial aliasing problem, a three-step processing technique used in seismic array analysis is applied: (1) high-pass filtering above 1 Hz to retrieve the T-phase, followed by (2) extraction of the envelope of this signal to highlight the T-phase contribution, and finally (3) low-pass filtering of the envelope below 0.375 Hz. The F-K analysis then provides accurate back-azimuth and slowness estimations without spatial aliasing. Deconvolved waveforms are also processed to retrieve tsunami components by using a three-pole model of the frequency-amplitude-phase (FAP) response below 0.1 Hz and the measured sensor response for higher frequencies. It is also shown that short-period pressure fluctuations recorded by the IMS hydrophones correspond to theoretical dispersion curves of tsunamis. Thus, short-period dispersive tsunami signals can be identified by the IMS hydrophone triplets.
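
    The three-step technique lends itself to a compact sketch; the filter orders and the 250 Hz sampling rate are assumptions, but the band edges follow the description above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0  # assumed hydrophone sampling rate, Hz

def tphase_envelope(trace):
    hp = butter(4, 1.0, btype="high", fs=fs, output="sos")
    lp = butter(4, 0.375, btype="low", fs=fs, output="sos")
    t_phase = sosfiltfilt(hp, trace)       # step 1: keep energy above 1 Hz
    envelope = np.abs(hilbert(t_phase))    # step 2: analytic-signal envelope
    return sosfiltfilt(lp, envelope)       # step 3: smooth below 0.375 Hz
```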

  18. In-situ Chemical Exploration and Mapping using an Autonomous Underwater Vehicle

    NASA Astrophysics Data System (ADS)

    Camilli, R.; Bingham, B. S.; Jakuba, M.; Whelan, J.; Singh, H.; Whiticar, M.

    2004-12-01

    Recent advances in in-situ chemical sensing have emphasized several issues associated with making reliable chemical measurements in the ocean. Such measurements are often aliased temporally and/or spatially, and may suffer from instrumentation artifacts, such as slow response time, limited dynamic range, hysteresis, and environmental sensitivities (e.g., temperature and pressure). We focus on the in-situ measurement of light hydrocarbons. Specifically, we examine data collected using a number of methods including: a vertical profiler, autonomous underwater vehicle (AUV) surveys, and adaptive spatio-temporal survey techniques. We present data collected using a commercial METS sensor on a vertical profiler to identify and map structures associated with ocean bottom methane sources in the Saanich Inlet off Vancouver, Canada. This sensor was deployed in parallel with a submersible mass spectrometer and a shipboard equilibrator-gas chromatograph. Our results illustrate that spatial offsets as small as centimeters can produce significant differences in measured concentration. In addition, differences in response times between instruments can also alias the measurements. The results of this preliminary experiment underscore the challenges of quantifying ocean chemical processes with small-scale spatial variability and temporal variability that is often faster than the response times of many available instruments. We explore the capabilities and current limitations of autonomous underwater vehicles for extending the spatial coverage of new in-situ sensor technologies. We present data collected from deployments of Seabed, a passively stable, hover-capable AUV, at large-scale gas blowout features located along the U.S. Atlantic margin. Although these deployments successfully revealed previously unobservable oceanographic processes, temporal aliasing caused by sensor response as well as tidal variability manifests itself, illustrating the possibilities for misinterpretation of localized periodic anomalies. Finally, we present results of recent experimental chemical plume mapping surveys that were conducted off the coast of Massachusetts using adaptive behaviors that allow the AUV to optimize its mission plan to autonomously search for chemical anomalies. This adaptive operation is based on coupling the chemical sensor payload within a closed-loop architecture with the vehicle's navigation control system for real-time autonomous data assimilation and decision making. This allows the vehicle to autonomously refine the search strategy, thereby improving feature localization capabilities and enabling surveys at an appropriate temporal and spatial resolution.

  19. Evaluation of alignment error due to a speed artifact in stereotactic ultrasound image guidance.

    PubMed

    Salter, Bill J; Wang, Brian; Szegedi, Martin W; Rassiah-Szegedi, Prema; Shrieve, Dennis C; Cheng, Roger; Fuss, Martin

    2008-12-07

    Ultrasound (US) image guidance systems used in radiotherapy are typically calibrated for soft tissue applications, thus introducing errors in depth-from-transducer representation when used in media with a different speed of sound propagation (e.g. fat). This error is commonly referred to as the speed artifact. In this study we utilized a standard US phantom to demonstrate the existence of the speed artifact when using a commercial US image guidance system to image through layers of simulated body fat, and we compared the results with calculated/predicted values. A general purpose US phantom (speed of sound (SOS) = 1540 m s(-1)) was imaged on a multi-slice CT scanner at a 0.625 mm slice thickness and 0.5 mm x 0.5 mm axial pixel size. Target-simulating wires inside the phantom were contoured and later transferred to the US guidance system. Layers of various thickness (1-8 cm) of commercially manufactured fat-simulating material (SOS = 1435 m s(-1)) were placed on top of the phantom to study the depth-related alignment error. In order to demonstrate that the speed artifact is not caused by adding additional layers on top of the phantom, we repeated these measurements in an identical setup using commercially manufactured tissue-simulating material (SOS = 1540 m s(-1)) for the top layers. For the fat-simulating material used in this study, we observed the magnitude of the depth-related alignment errors resulting from the speed artifact to be 0.7 mm cm(-1) of fat imaged through. The measured alignment errors caused by the speed artifact agreed with the calculated values within one standard deviation for all of the different thicknesses of fat-simulating material studied here. We demonstrated the depth-related alignment error due to the speed artifact when using US image guidance for radiation treatment alignment and note that the presence of fat causes the target to be aliased to a depth greater than it actually is. For typical US guidance systems in use today, this will lead to delivery of the high dose region at a position slightly posterior to the intended region for a supine patient. When possible, care should be taken to avoid imaging through a thick layer of fat for larger patients in US alignments or, if unavoidable, the spatial inaccuracies introduced by the artifact should be considered by the physician during the formulation of the treatment plan.
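
    The reported ~0.7 mm/cm figure follows directly from the two sound speeds: the scanner converts echo time to depth assuming soft tissue, so a layer of slower material is rendered thicker than it is. A quick check:

```python
sos_assumed = 1540.0  # m/s, soft-tissue calibration of the US system
sos_fat = 1435.0      # m/s, fat-simulating material

# mm of depth error accumulated per cm of fat imaged through
error_per_cm = (sos_assumed / sos_fat - 1) * 10
print(round(error_per_cm, 2))  # 0.73 mm/cm, matching the measured ~0.7 mm/cm
```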

  20. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  1. Anisotropic scene geometry resampling with occlusion filling for 3DTV applications

    NASA Astrophysics Data System (ADS)

    Kim, Jangheon; Sikora, Thomas

    2006-02-01

    Image- and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability in free-viewpoint applications. However, two major limitations are ghosting and blurring, which result from their sampling-based mechanism. Scene geometry, which supports the selection of accurate sampling positions, can be obtained with a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method, since it can yield more accurate rendering quality without a large number of cameras. Local scene geometry presents two difficulties: limited geometrical density and uncovered areas containing hidden information. Both are serious drawbacks for reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space; the nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low density are efficiently removed by isotropic filtering, and the edge blurring can be solved by the anisotropic method in a single process. Because sampling gaps differ in size, the resampling condition is defined by considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong, meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.

  2. Azimuthal filter to attenuate ground roll noise in the F-kx-ky domain for land 3D-3C seismic data with uneven acquisition geometry

    NASA Astrophysics Data System (ADS)

    Arevalo-Lopez, H. S.; Levin, S. A.

    2016-12-01

    The vertical component of seismic wave reflections is contaminated by surface noise such as ground roll and secondary scattering from near surface inhomogeneities. A common method for attenuating these, unfortunately often aliased, arrivals is via velocity filtering and/or multichannel stacking. 3D-3C acquisition technology provides two additional sources of information about the surface wave noise that we exploit here: (1) areal receiver coverage, and (2) a pair of horizontal components recorded at the same location as the vertical component. Areal coverage allows us to segregate arrivals at each individual receiver or group of receivers by direction. The horizontal components, having much less compressional reflection body wave energy than the vertical component, provide a template of where to focus our energies on attenuating the surface wave arrivals. (In the simplest setting, the vertical component is a scaled 90 degree phase rotated version of the radial horizontal arrival, a potential third possible lever we have not yet tried to integrate.) The key to our approach is to use the magnitude of the horizontal components to outline a data-adaptive "velocity" filter region in the ω-kx-ky domain. The big advantage for us is that even in the presence of uneven receiver geometries, the filter automatically tracks through aliasing without manual sculpting and a priori velocity and dispersion estimation. The method was applied to an aliased synthetic dataset based on a five layer earth model which also included shallow scatterers to simulate near-surface inhomogeneities and successfully removed both the ground roll and scatterers from the vertical component (Figure 1).

  3. On the wave number 2 eastward propagating quasi 2 day wave at middle and high latitudes

    NASA Astrophysics Data System (ADS)

    Gu, Sheng-Yang; Liu, Han-Li; Pedatella, N. M.; Dou, Xiankang; Liu, Yu

    2017-04-01

    The temperature and wind data sets from the ensemble data assimilation version of the Whole Atmosphere Community Climate Model + Data Assimilation Research Testbed (WACCM + DART) developed at the National Center for Atmospheric Research (NCAR) are utilized to study the seasonal variability of the eastward quasi 2 day wave (QTDW) with zonal wave number 2 (E2) during 2007. The aliasing ratio of E2 from wave number 3 (W3) in the synoptic WACCM data set is a constant value of 4 × 10⁻⁶% due to its uniform sampling pattern, whereas the aliasing is latitudinally dependent if the WACCM fields are sampled asynoptically based on the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) sampling. The aliasing ratio based on SABER sampling is 75% at 40°S during late January, where and when W3 peaks. The analysis of the synoptic WACCM data set shows that the E2 is in fact a winter phenomenon, which peaks in the stratosphere and lower mesosphere at high latitudes. In the austral winter period, the amplitudes of E2 can reach 10 K, 20 m/s, and 30 m/s for temperature, zonal, and meridional winds, respectively. In the boreal winter period, the wave perturbations are only one third as strong as those in austral winter. Diagnostic analysis also shows that the mean flow instabilities in the winter upper mesosphere polar region provide sources for the amplification of E2. This is different from the westward QTDWs, whose amplifications are related to the summer easterly jet. In addition, the E2 also peaks at lower altitude than the westward modes.

  4. Diffusion in realistic biophysical systems can lead to aliasing effects in diffusion spectrum imaging.

    PubMed

    Lacerda, Luis M; Sperl, Jonathan I; Menzel, Marion I; Sprenger, Tim; Barker, Gareth J; Dell'Acqua, Flavio

    2016-12-01

    Diffusion spectrum imaging (DSI) is an imaging technique that has been successfully applied to resolve white matter crossings in the human brain. However, its accuracy in complex microstructure environments has not been well characterized. Here we have simulated different tissue configurations, sampling schemes, and processing steps to evaluate DSI performance under realistic biophysical conditions. A novel approach to compute the orientation distribution function (ODF) has also been developed to include biophysical constraints, namely integration ranges compatible with axial fiber diffusivities. The simulations identified several DSI configurations that consistently show aliasing artifacts caused by fast diffusion components for both isotropic diffusion and fiber configurations. The proposed method for ODF computation showed some improvement in reducing such artifacts and improving the ability to resolve crossings, while keeping the quantitative nature of the ODF. In this study, we identified an important limitation of current DSI implementations, specifically the presence of aliasing due to fast diffusion components such as those from pathological tissues, which are not well characterized and can lead to artifactual fiber reconstructions. To minimize this issue, a new way of computing the ODF was introduced, which removes most of these artifacts and offers improved angular resolution. Magn Reson Med 76:1837-1847, 2016. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
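
    The aliasing mechanism itself can be seen in a one-dimensional cartoon (this is not the authors' ODF estimator): a fast-diffusing component yields a displacement propagator wider than the field of view implied by the q-space sampling, so the discrete Fourier transform wraps it around. All values below are toy choices.

        import numpy as np

        N, q_max, Delta = 17, 0.05, 0.05   # q samples, max q (1/um), diffusion time (s)
        q = np.linspace(-q_max, q_max, N)

        def propagator(D):
            """Sample E(q) = exp(-4 pi^2 q^2 D Delta) and FFT to the
            displacement propagator on the grid implied by the sampling."""
            E = np.exp(-4 * np.pi**2 * q**2 * D * Delta)
            P = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(E))))
            return P / P.sum()

        slow, fast = propagator(D=5e2), propagator(D=5e4)   # um^2/s, toy values
        # Energy near the edges of the displacement FOV signals wrap-around
        # (aliasing); the fast component puts far more energy there.
        edge = np.r_[0:3, N - 3:N]
        print("edge energy  slow:", slow[edge].sum().round(3),
              " fast:", fast[edge].sum().round(3))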

  5. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
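
    The key property, that fixes taken seconds apart share strongly correlated error so the relative (step) error is far smaller than the advertised error radius, can be sketched with a toy AR(1) error model; the correlation and noise parameters are illustrative assumptions, not fitted values.

        import numpy as np

        rng = np.random.default_rng(2)
        n, sigma, rho = 10_000, 3.0, 0.999   # fixes, error SD (m), 1-s autocorrelation

        def ar1(n):
            """Autocorrelated position-error track along one axis."""
            e = np.empty(n)
            e[0] = rng.normal(0, sigma)
            for i in range(1, n):
                e[i] = rho * e[i - 1] + rng.normal(0, sigma * np.sqrt(1 - rho**2))
            return e

        ex, ey = ar1(n), ar1(n)
        print("median absolute error (m):", np.median(np.hypot(ex, ey)).round(2))
        step_err = np.hypot(np.diff(ex), np.diff(ey))
        print("median step error (m):   ", np.median(step_err).round(2))

    With these (assumed) values the absolute error is meters while the step error comes out in the 10-20 cm range, the same order as the field results reported above.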

  6. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  7. Sediment movement along the U.S. east coast continental shelf-I. Estimates of bottom stress using the Grant-Madsen model and near-bottom wave and current measurements

    USGS Publications Warehouse

    Lyne, V.D.; Butman, B.; Grant, W.D.

    1990-01-01

    Bottom stress is calculated for several long-term time-series observations, made on the U.S. east coast continental shelf during winter, using the wave-current interaction and moveable bed models of Grant and Madsen (1979, Journal of Geophysical Research, 84, 1797-1808; 1982, Journal of Geophysical Research, 87, 469-482). The wave and current measurements were obtained by means of a bottom tripod system which measured current using a Savonius rotor and vane and waves by means of a pressure sensor. The variables were burst sampled about 10% of the time. Wave energy was reasonably resolved, although aliased by wave groupiness, and wave period was accurate to 1-2 s during large storms. Errors in current speed and direction depend on the speed of the mean current relative to the wave current. In general, errors in bottom stress caused by uncertainties in measured current speed and wave characteristics were 10-20%. During storms, the bottom stress calculated using the Grant-Madsen models exceeded stress computed from conventional drag laws by a factor of about 1.5 on average and 3 or more during storm peaks. Thus, even in water as deep as 80 m, oscillatory near-bottom currents associated with surface gravity waves of period 12 s or longer will contribute substantially to bottom stress. Given that the Grant-Madsen model is correct, parameterizations of bottom stress that do not incorporate wave effects will substantially underestimate stress and sediment transport in this region of the continental shelf.
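
    The claim about 12 s waves in 80 m of water can be checked with a back-of-envelope linear-wave-theory calculation (this is not the Grant-Madsen model itself); the wave height is an illustrative choice.

        import numpy as np

        def wavenumber(T, h, g=9.81, iters=50):
            """Solve the linear dispersion relation w^2 = g k tanh(k h)
            for the wavenumber k by fixed-point iteration."""
            w = 2 * np.pi / T
            k = w**2 / g                      # deep-water initial guess
            for _ in range(iters):
                k = w**2 / (g * np.tanh(k * h))
            return k

        T, h, H = 12.0, 80.0, 4.0             # period (s), depth (m), height (m)
        k = wavenumber(T, h)
        # Near-bottom orbital velocity amplitude from linear theory.
        u_b = (H / 2) * (2 * np.pi / T) / np.sinh(k * h)
        print(f"k = {k:.4f} 1/m, near-bottom orbital velocity = {u_b:.2f} m/s")

    For these numbers the orbital velocity is roughly 0.2 m/s, which is indeed not negligible next to typical shelf currents.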

  8. CTER-rapid estimation of CTF parameters with error assessment.

    PubMed

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance both for initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
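
    The bootstrap step can be sketched generically (this is not CTER's code): resample the input units with replacement, re-run the estimator, and report the spread. The per-segment defocus values below are synthetic stand-ins.

        import numpy as np

        def bootstrap_std(estimator, samples, n_boot=500, rng=None):
            """Standard deviation of an estimator under resampling of the
            input units with replacement."""
            rng = rng or np.random.default_rng(0)
            samples = np.asarray(samples)
            idx = lambda: rng.integers(0, len(samples), len(samples))
            return np.std([estimator(samples[idx()]) for _ in range(n_boot)])

        # Toy stand-in: per-segment defocus estimates (um) from one micrograph.
        defocus = np.random.default_rng(3).normal(1.8, 0.05, size=64)
        print("bootstrap SD of mean defocus:",
              bootstrap_std(np.mean, defocus).round(4))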

  9. Multiple Hypothesis Correlation for Space Situational Awareness

    DTIC Science & Technology

    2011-08-29

    formulations with anti-aliasing through hybrid approaches such as the Drizzle algorithm [43] all the way up through to image superresolution techniques. Most... superresolution techniques. Second, given a set of images, either directly from the sensor or preprocessed using the above techniques, we showed how

  10. 78 FR 52553 - Privacy Act of 1974; Department of Homeland Security/ALL-035 Common Entity Index Prototype System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ... data elements: Full Name; Alias(es); Gender; Date of Birth; Country of Birth; Country of Citizenship... locked drawer behind a locked door. The records may be stored on magnetic disc, tape, or digital media...

  11. Tri-linear color multi-linescan sensor with 200 kHz line rate

    NASA Astrophysics Data System (ADS)

    Schrey, Olaf; Brockherde, Werner; Nitta, Christian; Bechen, Benjamin; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.

    2016-11-01

    In this paper we present a newly developed linear CMOS high-speed line-scanning sensor, realized in a 0.35 μm CMOS OPTO process, with line rates of 200 kHz for true RGB and 600 kHz for monochrome operation. In total, 60 lines are integrated in the sensor, allowing for electronic position adjustment. The lines are read out in a rolling shutter manner. The high readout speed is achieved by a column-wise organization of the readout chain. At full speed, the sensor provides RGB color images with a spatial resolution down to 50 μm. This feature enables a variety of applications such as quality assurance in print inspection, real-time surveillance of railroad tracks, in-line monitoring in flat panel fabrication lines, and many more. The sensor has a fill-factor close to 100%, preventing aliasing and color artefacts. The tri-linear technology is thus robust against aliasing, ensuring better inspection quality and less waste in production lines.
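
    One way to see why a near-100% fill factor suppresses aliasing: each pixel integrates the scene over its full aperture, which multiplies the spectrum by a sinc that attenuates frequencies above Nyquist before they fold back. A toy comparison, with an arbitrary above-Nyquist test frequency:

        import numpy as np

        pitch, n = 1.0, 256
        f = 231 / (256 * pitch)        # ~1.8x Nyquist; aliases onto bin 25
        x = np.arange(n) * pitch

        point = np.cos(2 * np.pi * f * x)          # 0% fill: point sampling
        # 100% fill: average the scene over each pixel aperture.
        fine = 64
        xf = (np.arange(n * fine) + 0.5) * pitch / fine
        box = np.cos(2 * np.pi * f * xf).reshape(n, fine).mean(axis=1)

        for name, s in [("point", point), ("box  ", box)]:
            amp = np.abs(np.fft.rfft(s)) / (n / 2)
            print(name, "aliased peak amplitude:", amp[1:].max().round(3))

    Point sampling folds the full amplitude back below Nyquist, while the box aperture attenuates it by roughly a factor of ten at this frequency.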

  12. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
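
    The heart of any adaptive Wiener filter estimate is the weight solve w = R^-1 p from a correlation model; the sketch below is a single-channel toy with an assumed exponential autocorrelation, not the paper's multichannel CFA statistical model.

        import numpy as np

        def awf_weights(obs_xy, est_xy, sigma2_n=0.01, rho=0.75):
            """Wiener weights w = R^-1 p for estimating the image value at
            est_xy from irregular samples at obs_xy, under the (toy)
            isotropic autocorrelation model r(d) = rho**d."""
            d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
            R = rho**d + sigma2_n * np.eye(len(obs_xy))        # sample-sample + noise
            p = rho**np.linalg.norm(obs_xy - est_xy, axis=-1)  # sample-estimate
            return np.linalg.solve(R, p)

        # Estimate the center of a unit cell from four irregular neighbors.
        obs = np.array([[0.0, 0.1], [1.0, 0.0], [0.1, 1.0], [0.9, 1.1]])
        w = awf_weights(obs, est_xy=np.array([0.5, 0.5]))
        print("weights:", w.round(3), " sum:", w.sum().round(3))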

  13. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application. PMID:27472341

  14. Turbulent Channel Flow Measurements with a Nano-scale Thermal Anemometry Probe

    NASA Astrophysics Data System (ADS)

    Bailey, Sean; Witte, Brandon

    2014-11-01

    Using a Nano-scale Thermal Anemometry Probe (NSTAP), streamwise velocity was measured in a turbulent channel flow wind tunnel at Reynolds numbers ranging from Reτ = 500 to Reτ = 4000. Use of these probes results in a sensing-length-to-viscous-length-scale ratio of just 5 at the highest Reynolds number measured. Thus, the measured results can be considered free of spatial filtering effects. Point statistics are compared to recently published DNS and LDV data at similar Reynolds numbers and the results are found to be in good agreement. However, comparison of the measured spectra provides further evidence of aliasing at long wavelengths due to application of Taylor's frozen flow hypothesis, with increased aliasing evident with increasing Reynolds number. In addition to conventional point statistics, the dissipative scales of turbulence are investigated with focus on the wall-dependent scaling. Results support the existence of a universal pdf of these scales once scaled to account for large-scale anisotropy. This research is supported by KSEF Award KSEF-2685-RDE-015.

  15. Potential and Pitfalls of High-Rate GPS

    NASA Astrophysics Data System (ADS)

    Smalley, R.

    2008-12-01

    With completion of the Plate Boundary Observatory (PBO), we are poised to capture a dense sampling of strong motion displacement time series from significant earthquakes in western North America with High-Rate GPS (HRGPS) data collected at 1 and 5 Hz. These data will provide displacement time series at potentially zero epicentral distance that, if valid, have great potential to contribute to understanding earthquake rupture processes. The caveat relates to whether or not the data are aliased: is the sampling rate fast enough to accurately capture the displacement's temporal history? Using strong motion recordings in the immediate epicentral area of several M 6.7-7.5 events, which can be reasonably expected in the PBO footprint, even the 5 Hz data may be aliased. Some sort of anti-alias processing, currently not applied, will therefore be necessary at the closest stations to guarantee the veracity of the displacement time series. We discuss several solutions based on a priori knowledge of the expected ground motion and the practicality of implementation.
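
    What such anti-alias processing could look like: low-pass filter the ground motion below the GPS Nyquist frequency before reducing it to the 5 Hz rate. A sketch assuming a synthetic 200 Hz record; scipy.signal.decimate supplies the anti-alias filter.

        import numpy as np
        from scipy.signal import decimate

        fs = 200.0                                 # strong-motion sampling rate (Hz)
        t = np.arange(0, 20, 1 / fs)
        # Toy ground motion: a 0.5 Hz pulse plus 4 Hz energy lying above the
        # 2.5 Hz Nyquist frequency of a 5 Hz GPS receiver.
        x = (np.exp(-((t - 10) / 2) ** 2) * np.sin(2 * np.pi * 0.5 * t)
             + 0.3 * np.sin(2 * np.pi * 4.0 * t))

        naive = x[::40]                            # plain 5 Hz subsampling: aliased
        clean = decimate(x, 40, ftype="fir")       # FIR anti-alias filter, then 5 Hz
        rms = np.sqrt(np.mean((naive - clean) ** 2))
        print("RMS difference between the two 5 Hz series:", rms.round(3))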

  16. Sampling theory for asynoptic satellite observations. I - Space-time spectra, resolution, and aliasing. II - Fast Fourier synoptic mapping

    NASA Technical Reports Server (NTRS)

    Salby, M. L.

    1982-01-01

    An evaluation of the information content of asynoptic data taken in the form of nadir sonde and limb scan observations is presented, and a one-to-one correspondence is established between the alias-free data and twice-daily synoptic maps. Attention is given to space and time limitations of sampling, and the orbital geometry is discussed. The sampling pattern is demonstrated to determine unique space-time spectra at all wavenumbers and frequencies. Spectral resolution and aliasing are explored, while restrictions on sampling and information content are defined. It is noted that irregular sampling at high latitudes produces spurious contamination effects. An Asynoptic Sampling Theorem is thereby formulated, as is a Synoptic Retrieval Theorem in the second part of the article. In the latter, a procedure is developed for retrieving the unique correspondence between the asynoptic data and the synoptic maps. Application examples are provided using data from the Nimbus-6 satellite.

  17. USING LEAKED POWER TO MEASURE INTRINSIC AGN POWER SPECTRA OF RED-NOISE TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, S. F.; Xue, Y. Q., E-mail: zshifu@mail.ustc.edu.cn, E-mail: xuey@ustc.edu.cn

    Fluxes emitted at different wavebands from active galactic nuclei (AGNs) fluctuate at both long and short timescales. The variation can typically be characterized by a broadband power spectrum, which exhibits a red-noise process at high frequencies. The standard method of estimating the power spectral density (PSD) of AGN variability is easily affected by systematic biases such as red-noise leakage and aliasing, in particular when the observation spans a relatively short period and is gapped. Focusing on the high-frequency PSD that is strongly distorted due to red-noise leakage and usually not significantly affected by aliasing, we develop a novel and observable normalized leakage spectrum (NLS), which sensitively describes the effects of leaked red-noise power on the PSD at different temporal frequencies. Using Monte Carlo simulations, we demonstrate how an AGN underlying PSD sensitively determines the NLS when there is severe red-noise leakage, and thereby how the NLS can be used to effectively constrain the underlying PSD.
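
    The leakage effect itself is easy to reproduce with the standard Timmer and Koenig (1995) simulation recipe (the NLS construction is not reproduced here); the PSD slope and segment length below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)

        def rednoise(n, slope=2.5):
            """Timmer & Koenig (1995): draw Fourier amplitudes from the model
            PSD ~ f**-slope with random phases, then invert to a time series."""
            f = np.fft.rfftfreq(n, d=1.0)[1:]
            amp = f ** (-slope / 2) * (rng.standard_normal(len(f))
                                       + 1j * rng.standard_normal(len(f)))
            return np.fft.irfft(np.concatenate([[0], amp]), n)

        x = rednoise(2**16)
        seg = x[:256]                              # a short observing window
        psd = np.abs(np.fft.rfft(seg - seg.mean()))**2
        f = np.fft.rfftfreq(256)[1:]
        slope_fit = -np.polyfit(np.log(f), np.log(psd[1:]), 1)[0]
        # Typically flatter than the true 2.5: red-noise power has leaked
        # from unresolved low frequencies into the measured band.
        print("fitted slope:", slope_fit.round(2))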

  18. Analytical three-point Dixon method: With applications for spiral water-fat imaging.

    PubMed

    Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G

    2016-02-01

    The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions of water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single breath-hold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with comparable sharpness to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.

  19. New MHD feedback control schemes using the MARTe framework in RFX-mod

    NASA Astrophysics Data System (ADS)

    Piron, Chiara; Manduchi, Gabriele; Marrelli, Lionello; Piovesan, Paolo; Zanca, Paolo

    2013-10-01

    Real-time feedback control of MHD instabilities is a topic of major interest in magnetic thermonuclear fusion, since it allows a device's performance to be optimized even beyond its stability bounds. The stability properties of different magnetic configurations are important test benches for real-time control systems. RFX-mod, a Reversed Field Pinch experiment that can also operate as a tokamak, is a well-suited device to investigate this topic. It is equipped with a sophisticated magnetic feedback system that controls MHD instabilities and error fields by means of 192 active coils and a corresponding grid of sensors. In addition, the RFX-mod control system has recently gained new capabilities thanks to the introduction of the MARTe framework and of a new CPU architecture. These capabilities allow the study of new feedback algorithms relevant to both RFP and tokamak operation and contribute to the debate on the optimal feedback strategy. This work focuses on the design of new feedback schemes. For this purpose new magnetic sensors have been explored, together with new algorithms that refine the de-aliasing computation of the radial sideband harmonics. The comparison of the performance of different sensors and feedback strategies is described in both RFP and tokamak experiments.

  20. Contour-Based Corner Detection and Classification by Using Mean Projection Transform

    PubMed Central

    Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein

    2014-01-01

    Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images. PMID:24590354

  1. Contour-based corner detection and classification by using mean projection transform.

    PubMed

    Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein

    2014-02-28

    Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images.

  2. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24- view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
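
    The view-interpolation step amounts to filling the missing projection angles by interpolation along the view axis; a minimal sketch with per-detector linear interpolation (no angular wrap-around handled), not the authors' implementation:

        import numpy as np

        def interpolate_views(sparse_sino, sparse_angles, full_angles):
            """Fill missing views by linear interpolation along the angle
            axis; sparse_sino has shape (n_sparse_views, n_detectors)."""
            full = np.empty((len(full_angles), sparse_sino.shape[1]))
            for d in range(sparse_sino.shape[1]):       # per detector column
                full[:, d] = np.interp(full_angles, sparse_angles,
                                       sparse_sino[:, d])
            return full

        # 41 measured views interpolated onto a 984-view grid (one rotation).
        sparse_angles = np.linspace(0, 2 * np.pi, 41, endpoint=False)
        full_angles = np.linspace(0, 2 * np.pi, 984, endpoint=False)
        sino = np.random.default_rng(5).random((41, 888))   # toy sinogram
        print(interpolate_views(sino, sparse_angles, full_angles).shape)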

  3. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    PubMed Central

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-01-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. Methods: We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results: With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Conclusions: Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24- view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode. PMID:26352168

  4. Generalized Aliasing as a Basis for Program Analysis Tools

    DTIC Science & Technology

    2000-11-01

    R t R u is an edge in G. Then the graph is partitioned into strongly connected components called cluster levels. This partition is written S... form t bc u is displayed as a solid edge from t's node to u's node labelled with bc. A constraint of the form t Fi u is displayed as a dotted edge ... M N F is in the VPR { For each node N in G { If M N F > M N is in the VPR { If there is no edge from N

  5. Aliasing of the Schumann resonance background signal by sprite-associated Q-bursts

    NASA Astrophysics Data System (ADS)

    Guha, Anirban; Williams, Earle; Boldi, Robert; Sátori, Gabriella; Nagy, Tamás; Bór, József; Montanyà, Joan; Ortega, Pascal

    2017-12-01

    The Earth's naturally occurring Schumann resonances (SR) are composed of a quasi-continuous background component and a larger-amplitude, short-duration transient component, otherwise called the 'Q-burst' (Ogawa et al., 1967). Sprites in the mesosphere are also known to accompany the energetic positive ground flashes that launch the Q-bursts (Boccippio et al., 1995). Spectra of the background SR require a natural stabilization period of ∼10-12 min for the three conspicuous modal parameters to be derived from Lorentzian fitting. Before the spectra are computed and the fitting process is initiated, the raw time series data need to be properly filtered for local cultural noise and narrowband interference, as well as for large transients in the form of global Q-bursts. Mushtak and Williams (2009) describe an effective technique called Isolated Lorentzian (I-LOR), in which the contributions from local cultural and various other noises are minimized to a great extent. An automated technique based on median filtering of time series data has been developed here. These special lightning flashes are known to have a greater contribution in the ELF range (below 1 kHz) than general negative CG strikes (Huang et al., 1999; Cummer et al., 2006). The global distributions of these Q-bursts have been studied by wave impedance methods from single-station ELF measurements at Rhode Island, USA (Huang et al., 1999) and from Japan (Hobara et al., 2006). The present work aims to demonstrate the effect of Q-bursts on SR background spectra using GPS time-stamped observations of TLEs. It is observed that the Q-bursts selected for the present work do alias the background spectra over a 5-s period; however, the amplitudes of these Q-bursts are far below the background threshold of 16 Core Standard Deviations (CSD), so they do not strongly alias the background spectra of 10-12 min duration. The examination of one exceptional Q-burst shows that appreciable spectral aliasing can occur even when 12-min spectral integrations are considered. The statistical result shows that for a 12-min spectrum, events above 16 CSD are capable of producing significant frequency aliasing of the modal frequencies, although the intensity aliasing might have a negligible effect unless the events are exceptionally large (∼200 CSD). The spectral CSD methodology may be used to extract the time of arrival of the Q-burst transients. This methodology may be combined with hyperbolic ranging, thus becoming an effective tool to detect TLEs globally with a modest number of networked observational stations.
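
    A median-filter transient excision step along these lines might look as follows; the window length and the MAD-based stand-in for a threshold in 'core standard deviations' are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def excise_transients(x, win=101, k=16.0):
            """Flag samples deviating from a running median by more than k
            robust standard deviations and replace them by the median."""
            med = median_filter(x, size=win, mode="nearest")
            resid = x - med
            sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
            bad = np.abs(resid) > k * sigma
            y = x.copy()
            y[bad] = med[bad]
            return y, bad

        rng = np.random.default_rng(6)
        x = rng.standard_normal(5000)
        x[2500:2510] += 40.0                  # a Q-burst-like transient
        y, bad = excise_transients(x)
        print("samples excised:", int(bad.sum()))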

  6. Systematic effects of foreground removal in 21-cm surveys of reionization

    NASA Astrophysics Data System (ADS)

    Petrovic, Nada; Oh, S. Peng

    2011-05-01

    21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
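
    The basic mechanism, that projecting out smooth line-of-sight structure removes 3D power well beyond the largest scales, can be checked on a toy Gaussian field; the mean-plus-slope projection below is a crude surrogate for real foreground cleaning.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 64
        field = rng.standard_normal((n, n, n))       # white toy field

        # Remove mean and linear trend along the line of sight (axis 0)
        # for every sightline, as a cartoon of foreground subtraction.
        z = np.arange(n) - (n - 1) / 2
        B = np.stack([np.ones(n), z], axis=1)
        proj = B @ np.linalg.inv(B.T @ B) @ B.T
        cleaned = field - np.einsum('ij,jxy->ixy', proj, field)

        def pk(f):
            """Spherically averaged power spectrum in |k| bins."""
            F = np.abs(np.fft.fftn(f))**2 / f.size
            k = np.fft.fftfreq(n)
            kk = np.sqrt(sum(g**2 for g in np.meshgrid(k, k, k, indexing='ij')))
            bins = np.linspace(0, 0.5, 12)
            idx = np.digitize(kk.ravel(), bins)
            return np.array([F.ravel()[idx == i].mean() for i in range(1, 12)])

        # Ratios below 1, strongest in the lowest |k| bins, show the
        # suppression that the cleaning aliases into the 3D spectrum.
        print(np.round(pk(cleaned) / pk(field), 2))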

  7. Wave-CAIPI ViSTa: highly accelerated whole-brain direct myelin water imaging with zero-padding reconstruction.

    PubMed

    Wu, Zhe; Bilgic, Berkin; He, Hongjian; Tong, Qiqi; Sun, Yi; Du, Yiping; Setsompop, Kawin; Zhong, Jianhui

    2018-09-01

    This study introduces a highly accelerated whole-brain direct visualization of short transverse relaxation time component (ViSTa) imaging using a wave controlled aliasing in parallel imaging (CAIPI) technique, for acquisition within a clinically acceptable scan time, with the preservation of high image quality and sufficient spatial resolution, and reduced residual point spread function artifacts. Double inversion RF pulses were applied to preserve the signal from short T1 components for directly extracting myelin water signal in ViSTa imaging. A 2D simultaneous multislice and a 3D acquisition of ViSTa images incorporating wave-encoding were used for data acquisition. Improvements brought by a zero-padding method in wave-CAIPI reconstruction were also investigated. The zero-padding method in wave-CAIPI reconstruction reduced the root-mean-square errors between the wave-encoded and Cartesian gradient echoes for all wave gradient configurations in simulation, and reduced the side-main lobe intensity ratio from 34.5% to 16% in the thin-slab in vivo ViSTa images. In a 4× acceleration simultaneous-multislice scenario, wave-CAIPI ViSTa achieved negligible g-factors (g_mean/g_max = 1.03/1.10), while retaining minimal interslice artifacts. An 8× accelerated acquisition of 3D wave-CAIPI ViSTa imaging covering the whole brain with 1.1 × 1.1 × 3 mm3 voxel size was achieved within 15 minutes, and only incurred a small g-factor penalty (g_mean/g_max = 1.05/1.16). Whole-brain ViSTa images were obtained within 15 minutes with negligible g-factor penalty by using wave-CAIPI acquisition and zero-padding reconstruction. The proposed zero-padding method was shown to be effective in reducing residual point spread function for wave-encoded images, particularly for ViSTa. © 2018 International Society for Magnetic Resonance in Medicine.

  8. 76 FR 34720 - Chemical Facility Anti-Terrorism Standards Personnel Surety Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ...; Date of birth; Place of birth; Gender; Citizenship; Passport information; Visa information; Alien... birth; and c. Citizenship or Gender. The Department will require that high-risk chemical facilities.... Aliases; b. Gender (for Non-U.S. persons); c. Place of birth; and d. DHS Redress Number. In lieu of...

  9. 77 FR 28250 - Entity List Additions; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... person who was added under the destination of Pakistan to clarify the text is the address of this person... follows: Pakistan (1) Jalaluddin Haqqani, a.k.a., the following seven aliases: --General Jalaluddin... Jalaluddin. --Miram Shah, Pakistan. United Arab Emirates (1) Al Maskah Used Car and Spare Parts, Maliha Road...

  10. Android REST Client Application to View, Collect, and Exploit Video and Image Data

    DTIC Science & Technology

    2013-09-01

    Superresolution Image Reconstruction From a Sequence of Aliased Imagery. Applied Optics 2006, 45 (21), 5073–5085. 3. Driggers, R. G.; Krapels, K. A...Murrill, S.; Young, S. S.; Theilke, M.; Schuler, J. M. Superresolution Performance for Undersampled Imagers. Optical Engineering 2005, 44 (01). 4. Young

  11. 75 FR 62173 - In the Matter of the Review of the Designation of Jemaah Islamiya (JI and Other Aliases) as a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... maintained. This determination shall be published in the Federal Register. Dated: September 28, 2010. Hillary Rodham Clinton, Secretary of State. [FR Doc. 2010-25333 Filed 10-6-10; 8:45 am] BILLING CODE 4710-10-P ...

  12. Enhancing National Security by Strengthening the Legal Immigration System

    DTIC Science & Technology

    2009-12-01

    Ramzi Yousef traveled from Pakistan to New York’s John F. Kennedy ( JFK ) airport using aliases. Both men possessed a variety of documents, including...both Yousef and another conspirator, Eyad Ismoil, to JFK airport . Yousef used a false passport to escape to Pakistan, and Ismoil fled to Jordan

  13. A Multi-Satellite GRACE-like Mission Using Small Satellites

    NASA Astrophysics Data System (ADS)

    Stephens, M.; Bender, P. L.; Nerem, R.; Pierce, R.; Wiese, D. N.

    2010-12-01

    Measurement of global water variation provides information critical to climate change and water resource monitoring. The Gravity Recovery and Climate Experiment II (GRACE II) was chosen as a Tier III mission by the National Research Council's decadal survey because of its unique ability to measure the global mass distributions and variations in the mass distribution caused primarily by water variation. We discuss a multi-satellite approach to a GRACE-like mission. Enhanced spatial resolution of mass variations over those provided by the current GRACE mission can be achieved by improving the ranging accuracy; an interferometric ranging concept that improves the ranging accuracy has been demonstrated[1]. However, recent calculations show that to obtain the full science improvement using interferometric ranging, temporal aliasing errors due to modeling and to undersampling of geophysical signals must be mitigated[2]. One approach is to improve the data analysis techniques and validation processes. Another approach is to fly two or more pairs of satellites, thereby sampling the Earth's gravitational field at shorter time intervals[3]. A multiple-pair mission is often dismissed as too expensive, but the mission costs of a multiple-pair GRACE-like mission could be greatly reduced by developing compact ranging systems so that the mass, power, and volume usage is consistent with small spacecraft buses. Such size reduction drastically reduces the launch costs by allowing the spacecraft to be launched as auxiliary payloads. We will discuss the technological challenges that are associated with a GRACE-like mission that uses smallsats to reduce the costs of flying more than one pair of satellites, as well as the scientific benefits of two or more satellite pairs. The technological challenges include reducing the size of the payload and developing a low-drag, low-pointing jitter spacecraft. [1]Pierce, R., J. Leitch, M. Stephens, P. Bender, and R. Nerem, “Intersatellite range monitoring using optical interferometry”, Appl. Opt. 47 (2008), 5007. [2]P. Visser and E. Pavlis in ”Report from the Workshop on The Future of Satellite Gravimetry”, edited by R. Koop and R. Rummel (ESTEC, Noordwijk, The Netherlands, 12-13 April, 2007), pg. 11. [3]Bender, P. L., D. N. Wiese, and R. S. Nerem, “A possible dual-GRACE mission with 90 degree and 63 degree inclination orbits, Proceedings of the 3rd International Symposium on Formation Flying, Missions and Technologies”, ESA Communication Production Office, ESA-SP-654, 2008.

  14. 77 FR 58006 - Addition of Certain Persons to the Entity List; Removal of Person From the Entity List Based on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ...; (5) Chinese Academy of Engineering Physics, a.k.a., the following seventeen aliases: --Ninth Academy...; --Southwest Institute of Explosives and Chemical Engineering; --Southwest Institute of Fluid Physics...; --Southwest Institute of Materials; --Southwest Institute of Nuclear Physics and Chemistry (a.k.a., China...

  15. 75 FR 9238 - Privacy Act of 1974; Department of Homeland Security United States Immigration Customs and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-01

    ... place of birth; passport and other travel document information; nationality; aliases; Alien Registration... date and time of a successful collection and confirmation from the FBI that the sample was able to be... alleged violations of criminal or immigration law (location, date, time, event category, types of criminal...

  16. Aliasing, Ambiguities, and Interpolation in Wideband Direction-of-Arrival Estimation Using Antenna Arrays

    ERIC Educational Resources Information Center

    Ho, Chung-Cheng

    2016-01-01

    For decades, direction finding has been an important research topic in many applications such as radar, location services, and medical diagnosis for treatment. For such applications the precision of location estimation plays an important role, so a higher-precision location estimation method is always desirable. Although…

  17. Regional Characteristics for Interpreting Inverted Echo Sounder (IES) observations

    DTIC Science & Technology

    1987-06-01

    rounding the IESs. There are seasonal warming and cooling effects which may be missed with... and ideally, we should like to have a series of hydro-... This shallow variability is likely to be spatially and temporally aliased: it may be associated with internal waves or frontal fluctuations

  18. Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness

    NASA Astrophysics Data System (ADS)

    Hardy, Tyler J.; Cain, Stephen C.

    2016-05-01

    The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). A MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the assumption of a priori probability used in a MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented into a MHT algorithm, and the algorithm is then tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
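
    A minimal sketch of the multiple-hypothesis idea under white Gaussian noise (illustrative PSF, offsets, and threshold, not the authors' algorithm): compare matched-filter scores for a bank of sub-pixel-shifted PSFs against a no-object hypothesis.

        import numpy as np

        def gaussian_psf(shape, center, sigma=1.2):
            y, x = np.indices(shape)
            g = np.exp(-((x - center[1])**2 + (y - center[0])**2) / (2 * sigma**2))
            return g / np.linalg.norm(g)              # unit-norm template

        shape, sigma_n = (9, 9), 1.0
        offsets = [(dy, dx) for dy in (-0.25, 0, 0.25) for dx in (-0.25, 0, 0.25)]
        templates = [gaussian_psf(shape, (4 + dy, 4 + dx)) for dy, dx in offsets]

        rng = np.random.default_rng(8)
        truth = 8.0 * gaussian_psf(shape, (4.25, 3.75))   # true sub-pixel position
        chip = truth + sigma_n * rng.standard_normal(shape)

        # With unit-norm templates the matched-filter score has noise SD
        # sigma_n, so the log-likelihood is monotone in the score.
        scores = np.array([np.sum(chip * tm) for tm in templates])
        best = int(np.argmax(scores))
        threshold = 4.0 * sigma_n                     # ~4-sigma false-alarm gate
        print("best offset:", offsets[best],
              "detected:", bool(scores[best] > threshold))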

  19. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
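
    The normal-equations core of CG SENSE is compact enough to sketch; here a masked Cartesian FFT stands in for the non-Cartesian (NUFFT) encoding so the example stays self-contained, and the coil sensitivities are synthetic.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 64
        img = np.zeros((n, n)); img[20:44, 24:40] = 1.0       # toy object
        yy, xx = np.indices((n, n))
        coils = np.array([np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 30.0**2))
                          for cy, cx in [(0, 0), (0, n), (n, 0), (n, n)]])
        mask = np.zeros((n, n), bool); mask[:, ::2] = True    # 2x undersampling

        def A(x):    # image -> undersampled multicoil k-space
            return np.array([mask * np.fft.fft2(c * x) for c in coils])

        def AH(y):   # (scaled) adjoint of A
            return sum(np.conj(c) * np.fft.ifft2(mask * yk)
                       for c, yk in zip(coils, y))

        noise = 0.01 * (rng.standard_normal((4, n, n))
                        + 1j * rng.standard_normal((4, n, n)))
        data = A(img) + noise

        # Conjugate gradients on the normal equations A^H A x = A^H y.
        x = np.zeros((n, n), complex)
        r = AH(data); p = r.copy(); rs = np.vdot(r, r)
        for _ in range(20):
            Ap = AH(A(p))
            alpha = rs / np.vdot(p, Ap)
            x += alpha * p; r -= alpha * Ap
            rs_new = np.vdot(r, r)
            p = r + (rs_new / rs) * p; rs = rs_new
        err = np.linalg.norm(x.real - img) / np.linalg.norm(img)
        print("relative reconstruction error:", err.round(3))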

  20. Ground roll attenuation by synchrosqueezed curvelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Chen, Yangkang; Ma, Jianwei

    2018-04-01

    Ground roll is a type of coherent noise in land seismic data that has low frequency, low velocity and high amplitude. It damages reflection events that contain important information about subsurface structures, hence the removal of ground roll is a crucial step in seismic data processing. A suitable transform is needed for removal of ground roll. Curvelet transform is an effective sparse transform that optimally represents seismic events. In addition, the curvelets can provide a multiscale and multidirectional decomposition of the input data in time-frequency and angular domain, which can help distinguish between ground roll and useful signals. In this paper, we apply synchrosqueezed curvelet transform (SSCT) for ground roll attenuation. The synchrosqueezing technique in SSCT is used to precisely reallocate the energy of local wave vectors in order to separate ground roll from the original data with higher resolution and higher fidelity. Examples of synthetic and field seismic data reveal that SSCT performs well in the suppression of aliased and non-aliased ground roll while preserving reflection waves, in comparison with high-pass filtering, wavelet and curvelet methods.

  1. The Earth Gravitational Observatory (EGO): Nanosat Constellations For Advanced Gravity Mapping

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Saltman, A.; Bettadpur, S. V.; Nerem, R. S.; Abel, J.

    2017-12-01

    The trend to nanosats for space-based remote sensing is transforming system architectures: fleets of "cellular" craft scanning Earth with exceptional precision and economy. GeoOptics Inc has been selected by NASA to develop a vision for that transition with an initial focus on advanced gravity field mapping. Building on our spaceborne GNSS technology we introduce innovations that will improve gravity mapping roughly tenfold over previous missions at a fraction of the cost. The power of EGO is realized in its N-satellite form where all satellites in a cluster receive dual-frequency crosslinks from all other satellites, yielding N(N-1)/2 independent measurements. Twelve "cells" thus yield 66 independent links. Because the cells form a 2D arc with spacings ranging from 200 km to 3,000 km, EGO senses a wider range of gravity wavelengths and offers greater geometrical observing strength. The benefits are two-fold: Improved time resolution enables observation of sub-seasonal processes, as from hydro-meteorological phenomena; improved measurement quality enhances all gravity solutions. For the GRACE mission, key limitations arise from such spacecraft factors as long-term accelerometer error, attitude knowledge and thermal stability, which are largely independent from cell to cell. Data from a dozen cells reduces their impact by 3x, by the "root-n" averaging effect. Multi-cell closures improve on this further. The many closure paths among 12 cells provide strong constraints to correct for observed range changes not compatible with a gravity source, including accelerometer errors in measuring non-conservative forces. Perhaps more significantly from a science standpoint, system-level estimates with data from diverse orbits can attack the many scientifically limiting sources of temporal aliasing.

  2. Simultaneous Multislice Echo Planar Imaging With Blipped Controlled Aliasing in Parallel Imaging Results in Higher Acceleration: A Promising Technique for Accelerated Diffusion Tensor Imaging of Skeletal Muscle.

    PubMed

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-07-01

    The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped controlled aliasing in parallel imaging results in higher acceleration unaliasing technique. After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic resonance scanner using a 15-channel knee coil. The EPI was performed at a b value of 500 s/mm2 without slice acceleration (conventional DTI) as well as with 2-fold and 3-fold acceleration. Fractional anisotropy (FA) and mean diffusivity (MD) were measured in all 3 acquisitions. Fiber tracking performance was compared between the acquisitions regarding the number of tracks, average track length, and anatomical precision using multivariate analysis of variance and Mann-Whitney U tests. Acquisition time was 7:24 minutes for conventional DTI, 3:53 minutes for 2-fold acceleration, and 2:38 minutes for 3-fold acceleration. Overall FA and MD values ranged from 0.220 to 0.378 and 1.595 to 1.829 mm2/s, respectively. Two-fold acceleration yielded similar FA and MD values (P ≥ 0.901) and similar fiber tracking performance compared with conventional DTI. Three-fold acceleration resulted in comparable MD (P = 0.199) but higher FA values (P = 0.006) and significantly impaired fiber tracking in the soleus and tibialis anterior muscles (number of tracks, P < 0.001; anatomical precision, P ≤ 0.005). Simultaneous multislice EPI with blipped controlled aliasing in parallel imaging results in higher acceleration can remarkably reduce acquisition time in DTI of skeletal muscle with similar image quality and quantification accuracy of diffusion parameters. This may increase the clinical applicability of muscle anisotropy measurements.

  3. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool with very little attempts to quantify its images. Many imaging artifacts hinder the quantification of the color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we will address the color Doppler aliasing problem and present a recovery methodology for the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations for other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on the recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.
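
    When the aliased velocities are mapped to phases in (-pi, pi], recovery along a smooth profile reduces to standard 1D phase unwrapping; a toy sketch (the paper's 2D cutline-based algorithm is considerably more involved):

        import numpy as np

        def unwrap_doppler(v_aliased, v_nyq):
            """Recover velocities beyond the Nyquist limit along a profile
            by scaling to phase and applying 1D phase unwrapping."""
            return v_nyq * np.unwrap(np.pi * v_aliased / v_nyq) / np.pi

        v_nyq = 0.4                                   # Nyquist velocity (m/s)
        x = np.linspace(-1, 1, 201)
        v_true = 0.7 * np.exp(-(x / 0.4) ** 2)        # jet peak exceeds v_nyq
        v_meas = (v_true + v_nyq) % (2 * v_nyq) - v_nyq   # aliased measurement
        v_fix = unwrap_doppler(v_meas, v_nyq)
        print("max error after unwrapping:", np.abs(v_fix - v_true).max().round(6))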

  4. Stochastic downscaling of numerically simulated spatial rain and cloud fields using a transient multifractal approach

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.; Miranda, P. M.

    2012-04-01

    Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10^9-10^10 between the largest (planetary) and smallest (viscous dissipation) scales. Furthermore, atmospheric fields with strong variability over wide ranges of scale most likely should not be artificially split into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes acting on the resolved flow. This is a pragmatic approach to the problem, not a complete solution to it. Such models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields, due to implicit and explicit numerical diffusion as well as physically based subgrid-scale turbulent mixing, resulting in smoother and less intermittent fields than observed. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches equipped with a possible realization of subgrid processes, potentially coupled to the resolved scales over the range of significant scale interactions, provide one alternative. Stochastic multifractal models, based on the cascade phenomenology of the atmosphere and its governing equations in particular, are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior at the resolved scales. This is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior depending on large-scale parameters such as terrain forcing and mean atmospheric conditions at each location, particularly mean wind speed and moist stability. A particularly robust feature is the transition of the multiscaling parameters between stable and unstable cases, which has a clear physical correspondence to the transition from a stratiform to an organized (banded) convective regime. Thus multifractal diagnostics of moist processes are fundamentally transient and should provide a physically robust basis for the downscaling and subgrid-scale parameterization of moist processes. Here, we investigate the possibility of using a simplified, computationally efficient multifractal downscaling methodology based on turbulent cascades to produce statistically consistent fields at scales finer than those resolved by the model. Specifically, we are interested in producing rainfall and cloud fields at the spatial resolutions necessary for effective forecasting of flash floods and earth flows. The results are examined by comparing downscaled fields against observations, and tendency error budgets are used to diagnose the evolution of transient errors in the numerical model prediction which can be attributed to aliasing.
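
    The cascade downscaling step can be sketched compactly. Below is a minimal 1D canonical multiplicative cascade in Python: each coarse cell is repeatedly split in two and multiplied by independent mean-one lognormal weights, producing a statistically scaling finer field. The weight parameter sigma stands in for the multifractal parameters that the study ties to terrain forcing and mean atmospheric conditions; its value here is simply assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cascade_downscale(coarse, n_levels, sigma=0.4):
        """Downscale a 1D field by a random multiplicative cascade: at each
        level every cell is split in two and multiplied by independent
        lognormal weights whose mean is one (so the mean field is conserved
        in expectation). sigma sets the small-scale intermittency."""
        field = np.asarray(coarse, dtype=float)
        for _ in range(n_levels):
            w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=2 * field.size)
            field = np.repeat(field, 2) * w
        return field

    coarse = np.array([0.0, 2.0, 5.0, 1.0])        # model rain rates (mm/h), assumed
    fine = cascade_downscale(coarse, n_levels=4)   # four doublings: 4 -> 64 cells
    print(fine.size, round(fine.mean(), 3), coarse.mean())
    ```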

  5. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(-γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T′, |t| ≤ T′ |log ħ| implies the norms of the errors are bounded by C′ exp(-γ′/ħ^σ) for some C′, γ′ > 0, and σ > 0.

  6. Interactions between Brief Flashed Lines at Threshold.

    DTIC Science & Technology

    1987-12-11

    [OCR-garbled report documentation page; only the cited references are legible:] Cass, P. C. (1986) Facilitatory interactions between flashed lines. Perception, 443-460. Smith, P. A. and Cass, P. C. (1967) Aliasing in the

  7. Abandoned Uranium Mine (AUM) Surface Areas, Navajo Nation, 2016, US EPA Region 9

    EPA Pesticide Factsheets

    This GIS dataset contains polygon features that represent all Abandoned Uranium Mines (AUMs) on or within one mile of the Navajo Nation. Attributes include mine names, aliases, Potentially Responsible Parties, reclamation status, EPA mine status, links to AUM reports, and the region in which an AUM is located. This dataset contains 608 features.

  8. 77 FR 44307 - In the Matter of the Review of the Designation of the Islamic Resistance Movement (Hamas and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-27

    ... DEPARTMENT OF STATE [Public Notice 7965] In the Matter of the Review of the Designation of the Islamic Resistance Movement (Hamas and Other Aliases) As a Foreign Terrorist Organization pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative...

  9. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  10. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  11. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  12. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  13. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  14. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  15. Simulation of an automatically-controlled STOL aircraft in a microwave landing system multipath environment

    NASA Technical Reports Server (NTRS)

    Toda, M.; Brown, S. C.; Burrous, C. N.

    1976-01-01

    The simulated response of a STOL aircraft to Microwave Landing System (MLS) multipath errors during final approach and touchdown is described. The MLS azimuth, elevation, and DME multipath errors were computed for a relatively severe multipath environment at Crissy Field, California, utilizing an MLS multipath simulation at MIT Lincoln Laboratory. A NASA/Ames six-degree-of-freedom simulation of an automatically controlled deHavilland C-8A STOL aircraft was used to determine the response to these errors. The results show that the aircraft response to all of the Crissy Field MLS multipath errors was small. The small MLS azimuth and elevation multipath errors did not result in any discernible aircraft motion, and the aircraft response to the relatively large (200-ft (61-m) peak) DME multipath was noticeable but small.

  16. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879

  17. High-resolution observations of the globular cluster NGC 7099

    NASA Astrophysics Data System (ADS)

    Sams, Bruce Jones, III

    The globular cluster NGC 7099 is a prototypical collapsed-core cluster. Through a series of instrumental, observational, and theoretical investigations, I have resolved its core structure using a ground-based telescope. The core has a radius of 2.15 arcsec when imaged with a V-band spatial resolution of 0.35 arcsec. Initial attempts at speckle imaging produced images of inadequate signal-to-noise ratio and resolution. To explain these results, a new, fully general signal-to-noise model has been developed. It properly accounts for all sources of noise in a speckle observation, including aliasing of high spatial frequencies by inadequate sampling of the image plane. The model, called Full Speckle Noise (FSN), can be used to predict the outcome of any speckle imaging experiment. A new high-resolution imaging technique called ACT (Atmospheric Correlation with a Template) was developed to create sharper astronomical images. ACT compensates for image motion due to atmospheric turbulence. ACT is similar to the Shift and Add algorithm, but uses a priori spatial knowledge about the image to further constrain the shifts. In this instance, the final images of NGC 7099 have resolutions of 0.35 arcsec from data taken in 1 arcsec seeing. The PAPA (Precision Analog Photon Address) camera was used to record the data. It is subject to errors when imaging cluster cores in a large field of view. The origin of these errors is explained, and several ways to avoid them are proposed. New software was created for the PAPA camera to properly flat-field images taken in a large field of view. Absolute photometry measurements of NGC 7099 made with the PAPA camera are accurate to 0.1 magnitude. Luminosity sampling errors dominate surface brightness profiles of the central few arcsec in a collapsed-core cluster. These errors set limits on the ultimate spatial accuracy of surface brightness profiles.

  18. Impact of spot charge inaccuracies in IMPT treatments.

    PubMed

    Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M

    2017-08-01

    Spot charge is one parameter of a pencil-beam scanning dose delivery system whose accuracy is typically high but whose required value has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5 × 10^6 protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium, and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms, and dose differences. The allowable absolute charge error for small-spot plans was about 2 × 10^6 protons; larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error was about 13%, 8%, and 6% for small, medium, and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is not strictly needed clinically. © 2017 American Association of Physicists in Medicine.
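
    The reported robustness to random, as opposed to systematic, charge errors, and its dependence on beam width, can be reproduced qualitatively with a toy 1D model of overlapping Gaussian spots. The grid spacing, beam sigma, and 10% error level below are assumptions for illustration, not the study's patient plans.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-30, 30, 601)             # dose profile axis (mm)
    centers = np.arange(-20.0, 21.0, 5.0)     # spot grid, 5 mm spacing (assumed)
    sigma = 6.0                               # beam sigma (mm), a "medium" spot

    def dose(weights):
        """Superpose 1D Gaussian pencil-beam spots with the given charges."""
        return sum(w * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
                   for w, c in zip(weights, centers))

    w0 = np.ones(centers.size)                # nominal (uniform) spot charges
    d0 = dose(w0)
    target = np.abs(x) < 15                   # evaluate inside the "target"

    d_random = dose(w0 * (1 + rng.normal(0.0, 0.10, w0.size)))  # 10% random errors
    d_system = dose(w0 * 1.10)                                  # 10% systematic error
    for name, d in (("random 10%", d_random), ("systematic 10%", d_system)):
        err = 100 * np.max(np.abs(d[target] - d0[target]) / d0[target])
        print(f"{name}: max dose deviation = {err:.1f}%")
    # Random per-spot errors partially cancel where neighbouring spots overlap
    # (more so for wider beams); a systematic error shifts the dose everywhere.
    ```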

  19. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) in both the military and civil realms is growing, largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large-scale aircraft have been well known and understood for decades, and usually involve a complex array of expensive, high-accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than for larger aircraft because small UAVs carry limited sensor suites to contain cost and are more susceptible to wind. The purpose of this research is to evaluate the ability of existing state estimation methods for small UAVs to accurately capture the aircraft states necessary for autopilot control in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Two state estimation methods that employ only accelerometer, gyro, and GPS measurements are then introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings-level ascent. It is shown that in zero wind, the first method produces significant steady-state attitude errors in both a coordinated turn and a wings-level ascent. In Dryden wind, it produces large noise on its attitude estimates and a nonzero mean error that increases with gyro bias. The second method is shown to exhibit no steady-state error inherent to its design in the tested scenarios. It can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from limited attitude error observability. The attitude errors are shown to be more observable in wind, but the increased integration error in wind outweighs the benefit of the improved observability, resulting in larger attitude errors in wind. Overall, this work highlights technical deficiencies of both state estimation methods that could be addressed in the future to enhance state estimation for small UAVs in windy conditions.
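
    For reference, the family of estimators discussed can be illustrated with a generic complementary filter (not either of the two methods evaluated in this work): integrate the gyro rate for high-frequency fidelity and correct slowly toward the attitude implied by the accelerometer's gravity vector. The sketch below treats the roll axis only, with an assumed constant gyro bias and an idealized accelerometer.

    ```python
    import numpy as np

    def complementary_roll(p, ay, az, dt, alpha=0.98):
        """Blend integrated gyro rate (good at high frequency) with the roll
        implied by the accelerometer's gravity vector (good at low frequency)."""
        phi, out = 0.0, []
        for p_k, ay_k, az_k in zip(p, ay, az):
            phi_acc = np.arctan2(ay_k, az_k)          # roll from gravity direction
            phi = alpha * (phi + p_k * dt) + (1 - alpha) * phi_acc
            out.append(phi)
        return np.array(out)

    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    true_roll = 0.3 * np.sin(0.5 * t)                 # synthetic manoeuvre
    p = np.gradient(true_roll, dt) + 0.01             # rate gyro with constant bias
    ay, az = np.sin(true_roll), np.cos(true_roll)     # idealized accelerometer
    est = complementary_roll(p, ay, az, dt)
    print(np.max(np.abs(est - true_roll)))            # stays small despite the bias
    ```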

  20. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.

  1. Acquiring Research-grade ALSM Data in the Commercial Marketplace

    NASA Astrophysics Data System (ADS)

    Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.

    2003-12-01

    The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include, at a minimum, X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but we have not quantified this relationship. The most significant operational limits on the vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights. Once these inconsistencies are resolved, it appears that the internal errors are the bulk of the error of the survey. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.

  2. Wide-Angle Multistatic Synthetic Aperture Radar: Focused Image Formation and Aliasing Artifact Mitigation

    DTIC Science & Technology

    2005-07-01

    [OCR fragment of the thesis front matter; recoverable citations and figure titles:] Progress in Applied Computational Electromagnetics. ACES, Syracuse, NY, 2004. Mahafza, Bassem R. Radar Systems Analysis and Design Using MATLAB... Figures: 4.5 RCS chamber coordinate system; 4.6 AFIT's RCS chamber; 4.11 Frequency domain schematic of RCS data collection; 4.12 Spherical coordinate system for RCS data calibration

  3. 75 FR 74127 - In the Matter of the Review of the Designation of Islamic Movement of Uzbekistan (IMU and Other...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ... DEPARTMENT OF STATE [Public Notice: 7250] In the Matter of the Review of the Designation of Islamic Movement of Uzbekistan (IMU and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative...

  4. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the ECMWF model variables are recommended.
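
    The benefit of sampling intervals that do not divide 24 hours evenly is easy to reproduce: an interval that divides the day samples the diurnal harmonics at fixed phase and aliases them into the time mean, while a nonintegral divisor drifts through all phases. A minimal synthetic demonstration (idealized harmonics, assumed amplitudes and phases):

    ```python
    import numpy as np

    hours = np.arange(24 * 365)                       # one year of hourly "truth"
    signal = (10.0                                    # mean plus two harmonics
              + 1.0 * np.sin(2 * np.pi * hours / 24.0 + 0.5)    # diurnal
              + 0.3 * np.sin(2 * np.pi * hours / 12.0 + 1.0))   # semidiurnal

    true_mean = signal.mean()
    for dt in (12, 24, 11, 7):                        # sampling intervals (hours)
        err = signal[::dt].mean() - true_mean
        print(f"dt = {dt:2d} h: time-mean error = {err:+.4f}")
    # 12 h and 24 h sample the harmonics at fixed phase and alias them into
    # the mean; 11 h and 7 h drift through all phases and average them out.
    ```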

  5. Spatial acoustic signal processing for immersive communication

    NASA Astrophysics Data System (ADS)

    Atkins, Joshua

    Computing is rapidly becoming ubiquitous as users expect devices that can augment and interact naturally with the world around them. In these systems it is necessary to have an acoustic front-end that is able to capture and reproduce natural human communication. Whether the end point is a speech recognizer or another human listener, the reduction of noise, reverberation, and acoustic echoes is a necessary and complex challenge. The focus of this dissertation is to provide a general method for approaching these problems using spherical microphone and loudspeaker arrays. In this work, a theory of capturing and reproducing three-dimensional acoustic fields is introduced from a signal processing perspective. In particular, the decomposition of the spatial part of the acoustic field into an orthogonal basis of spherical harmonics provides not only a general framework for analysis, but also many processing advantages. The spatial sampling error limits the upper frequency range with which a sound field can be accurately captured or reproduced. In broadband arrays, the cost and complexity of using multiple transducers is an issue. This work provides a flexible optimization method for determining the location of array elements to minimize the spatial aliasing error. The low-frequency array processing ability is also limited by the SNR, mismatch, and placement error of the transducers. To address this, a robust processing method is introduced and used to design a reproduction system for rendering over arbitrary loudspeaker arrays or binaurally over headphones. In addition to the beamforming problem, the multichannel acoustic echo cancellation (MCAEC) issue is also addressed. A MCAEC must adaptively estimate and track the constantly changing loudspeaker-room-microphone response to remove the sound field presented over the loudspeakers from that captured by the microphones. In the multichannel case, the system is overdetermined and many adaptive schemes fail to converge to the true impulse response. This forces the need to track both the near- and far-end room responses. A transform-domain method that mitigates this problem is derived and implemented. Results with a real system using a 16-channel loudspeaker array and 32-channel microphone array are presented.
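
    One of the quantitative limits mentioned here, the upper frequency set by spatial sampling, is commonly approximated for spherical arrays by the rule of thumb kr ≤ N, where k = 2πf/c is the acoustic wavenumber, r the array radius, and N the spherical harmonic order; above the corresponding frequency, spatial aliasing error grows. A small sketch (the 4.2 cm radius is an assumed example value):

    ```python
    import numpy as np

    def max_frequency(order, radius, c=343.0):
        """Upper usable frequency (Hz) from the rule of thumb k*r <= N,
        with k = 2*pi*f/c; above it, spatial aliasing error grows."""
        return order * c / (2 * np.pi * radius)

    for order in (1, 2, 3, 4):                       # spherical harmonic order
        print(order, round(max_frequency(order, radius=0.042)))  # 4.2 cm array
    ```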

  6. On removing interpolation and resampling artifacts in rigid image registration.

    PubMed

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  7. On Removing Interpolation and Resampling Artifacts in Rigid Image Registration

    PubMed Central

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R.; Fischl, Bruce

    2013-01-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration. PMID:23076044

  8. Power cepstrum technique with application to model helicopter acoustic data

    NASA Technical Reports Server (NTRS)

    Martin, R. M.; Burley, C. L.

    1986-01-01

    The application of the power cepstrum to measured helicopter-rotor acoustic data is investigated. A previously applied correction to the reconstructed spectrum is shown to be incorrect. For an exact echoed signal, the amplitude of the cepstrum echo spike at the delay time is linearly related to the echo relative amplitude in the time domain. If the measured spectrum is not entirely from the source signal, the cepstrum will not yield the desired echo characteristics, and cepstral aliasing may occur because of the effective sample rate in the frequency domain. The spectral analysis bandwidth must be less than one-half the echo ripple frequency or cepstral aliasing can occur. The power cepstrum editing technique is a useful tool for removing some of the contamination caused by acoustic reflections from measured rotor acoustic spectra. Cepstrum editing yields an improved estimate of the free-field spectrum, but the correction process is limited by the lack of accurate knowledge of the echo transfer function. An alternative procedure, which does not require cepstral editing, is proposed; it allows the complete correction of a contaminated spectrum through use of both the transfer function and the delay time of the echo process.
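
    The core computation is compact: the power cepstrum is the inverse Fourier transform of the log power spectrum, and a single echo of relative amplitude a at delay tau produces a spike near quefrency tau whose height grows with a, consistent with the linear relation noted above. A minimal synthetic sketch (sample rate, delay, and amplitude are assumed values):

    ```python
    import numpy as np

    fs = 10_000                          # sample rate (Hz), assumed
    rng = np.random.default_rng(1)
    s = rng.standard_normal(5000)        # broadband source signal (0.5 s)

    tau, a = 0.010, 0.5                  # echo delay (s) and relative amplitude
    d = int(tau * fs)
    x = s.copy()
    x[d:] += a * s[:-d]                  # x(t) = s(t) + a * s(t - tau)

    # Power cepstrum: inverse FFT of the log power spectrum. The echo ripple
    # in the spectrum becomes a spike at quefrency tau (and its multiples).
    cepstrum = np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) ** 2))
    quefrency = np.arange(cepstrum.size) / fs
    peak = quefrency[1 + np.argmax(cepstrum[1:cepstrum.size // 2])]
    print(f"estimated echo delay: {peak * 1e3:.1f} ms")   # ~10 ms
    ```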

  9. Evaluation of slice accelerations using multiband echo planar imaging at 3 Tesla

    PubMed Central

    Xu, Junqian; Moeller, Steen; Auerbach, Edward J.; Strupp, John; Smith, Stephen M.; Feinberg, David A.; Yacoub, Essa; Uğurbil, Kâmil

    2013-01-01

    We evaluate residual aliasing among simultaneously excited and acquired slices in slice accelerated multiband (MB) echo planar imaging (EPI). No in-plane accelerations were used in order to maximize and evaluate achievable slice acceleration factors at 3 Tesla. We propose a novel leakage (L-) factor to quantify the effects of signal leakage between simultaneously acquired slices. With a standard 32-channel receiver coil at 3 Tesla, we demonstrate that slice acceleration factors of up to eight (MB = 8) with blipped controlled aliasing in parallel imaging (CAIPI), in the absence of in-plane accelerations, can be used routinely with acceptable image quality and integrity for whole brain imaging. Spectral analyses of single-shot fMRI time series demonstrate that temporal fluctuations due to both neuronal and physiological sources were distinguishable and comparable up to slice-acceleration factors of nine (MB = 9). The increased temporal efficiency could be employed to achieve, within a given acquisition period, higher spatial resolution, increased fMRI statistical power, multiple TEs, faster sampling of temporal events in a resting state fMRI time series, increased sampling of q-space in diffusion imaging, or more quiet time during a scan. PMID:23899722

  10. Undersampled digital holographic interferometry

    NASA Astrophysics Data System (ADS)

    Halaq, H.; Demoli, N.; Sović, I.; Šariri, K.; Torzynski, M.; Vukičević, D.

    2008-04-01

    In digital holography, primary holographic fringes are recorded using a matrix CCD sensor. Because of the low spatial resolution of currently available CCD arrays, the angle between the reference and object beams must be limited to a few degrees. Namely, due to the digitization involved, Shannon's criterion imposes that the sampling frequency be at least twice the highest signal frequency. This means that, when recording an interference fringe pattern with a CCD sensor, the inter-fringe distance must be larger than twice the pixel period, which in turn limits the angle between the object and reference beams. If this angle cannot be limited to the required value in a practical holographic interferometry measuring setup, aliasing will occur in the reconstructed image. In this work, we demonstrate that the low spatial frequency metrology data can nevertheless be efficiently extracted by careful choice of twofold, and even threefold, undersampling of the object field. By combining time-averaged recording with the subtraction digital holography method, we present results for a loudspeaker membrane interferometric study obtained under strong aliasing conditions. High-contrast fringes, reflecting the vibration modes of the membrane, are obtained.
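
    The key fact exploited here is that undersampling folds a known high fringe frequency to a predictable low one, so the metrology content survives if the fold is accounted for. A one-line model of the folding (first Nyquist zone), with assumed example frequencies:

    ```python
    def aliased_frequency(f, fs):
        """Apparent frequency after sampling at rate fs (folded into the
        first Nyquist zone); units are arbitrary but consistent."""
        return abs(f - fs * round(f / fs))

    fs = 100.0                      # assumed sampling frequency
    for f in (30.0, 130.0, 230.0):  # direct and undersampled fringe frequencies
        print(f, "->", aliased_frequency(f, fs))  # all fold to 30.0
    ```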

  11. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
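
    The optimization described, an ℓ2 data-fidelity term plus an ℓ1 wavelet-sparsity term, is classically solved by iterative soft thresholding. The sketch below is a generic 1D ISTA baseline with a one-level Haar transform and a fixed threshold; it omits the paper's contributions (the noise-adaptive threshold and the edge-correlation prior matrix), and all parameters are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256

    def haar(x):
        """One-level orthonormal Haar transform (the sparsifying basis)."""
        return np.concatenate([(x[0::2] + x[1::2]) / np.sqrt(2),
                               (x[0::2] - x[1::2]) / np.sqrt(2)])

    def ihaar(c):
        a, d = c[:n // 2], c[n // 2:]
        x = np.empty(n)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        return x

    # Piecewise-constant test signal and a 40% random k-space sampling mask.
    x_true = np.zeros(n); x_true[60:120] = 1.0; x_true[160:200] = -0.5
    mask = rng.random(n) < 0.4

    def A(x):   # undersampled unitary Fourier "acquisition"
        return np.fft.fft(x)[mask] / np.sqrt(n)

    def At(z):  # its adjoint
        full = np.zeros(n, dtype=complex)
        full[mask] = z
        return np.real(np.fft.ifft(full)) * np.sqrt(n)

    y = A(x_true)                   # simulated undersampled measurements
    lam, x = 0.02, np.zeros(n)
    for _ in range(200):            # ISTA: gradient step on the l2 term...
        x = x + At(y - A(x))
        c = haar(x)                 # ...then soft-threshold in the wavelet domain (l1)
        x = ihaar(np.sign(c) * np.maximum(np.abs(c) - lam, 0.0))
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```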

  12. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP '08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^(-n/2)) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion (Harrow et al 2009 Phys. Rev. Lett. 103 150502) or quantum machine learning (Rebentrost et al 2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of 'active' gates, since all components have to be actively error corrected.

  13. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density by up to about 30 percent at high SNRs.

  14. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) of added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements are unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  15. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  16. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  17. Small Atomic Orbital Basis Set First‐Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources

    PubMed Central

    Sure, Rebecca; Brandenburg, Jan Gerit

    2015-01-01

    In quantum chemical computations the combination of Hartree-Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double-zeta quality is still widely used, for example, in the popular B3LYP/6-31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean-field methods. PMID:27308221

  18. Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom

    PubMed Central

    Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.

    2013-01-01

    Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503

  19. Reachable Sets for Multiple Asteroid Sample Return Missions

    DTIC Science & Technology

    2005-12-01

    reduce the number of feasible asteroid targets. Reachable sets are defined in a reduced classical orbital element space. The boundary of this reduced space is obtained by extremizing a family of... aliasing problems. Other coordinate elements, such as equinoctial elements, can provide a set of singularity-free slowly changing variables, but

  20. 78 FR 18808 - Addition of Certain Persons to the Entity List; Removal of Person From the Entity List Based on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ...) David Khayam, Apt 1811 Manchester Tower, Dubai Marina, Dubai, U.A.E.; and PO Box 111831, Al Daghaya... Rashed, Apt 1811 Manchester Tower, Dubai Marina, Dubai, U.A.E.; and PO Box 111831, Al Daghaya, Dubai, U.A... following two aliases: --Baet Alhoreya Electronics Trading; and --Baet Alhoreya, Apt 1811 Manchester Tower...

  1. 75 FR 40019 - In the Matter of the Review of the Designation of the Communist Party of the Philippines/New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... DEPARTMENT OF STATE [Public Notice: 7086] In the Matter of the Review of the Designation of the Communist Party of the Philippines/New People's Army (aka CPP/NPA and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of...

  2. Artifacts Of Spectral Analysis Of Instrument Readings

    NASA Technical Reports Server (NTRS)

    Wise, James H.

    1995-01-01

    Report presents an experimental and theoretical study of some of the artifacts introduced by processing the outputs of two nominally identical low-frequency-reading instruments: high-sensitivity servo-accelerometers mounted together and operating, in conjunction with signal-conditioning circuits, as seismometers. Processing involved analog-to-digital conversion with anti-aliasing filtering, followed by digital processing including frequency weighting and computation of different measures of power spectral density (PSD).

  3. Effects of spectrometer band pass, sampling, and signal-to-noise ratio on spectral identification using the Tetracorder algorithm

    USGS Publications Warehouse

    Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.

    2003-01-01

    Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
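
    The simulation pipeline described, convolving a library spectrum to a sensor's band pass and sampling interval before identification, reduces to a Gaussian-weighted resampling. A minimal sketch (wavelength grid, feature depth and width, and channel settings are assumed values) showing how a broader sampling interval and band pass smear a narrow absorption feature:

    ```python
    import numpy as np

    def resample_spectrum(wl, refl, centers, fwhm):
        """Convolve a high-resolution reflectance spectrum with Gaussian
        band passes (width fwhm) centred on each channel wavelength."""
        sig = fwhm / 2.35482
        return np.array([np.average(refl, weights=np.exp(-(wl - c) ** 2 / (2 * sig ** 2)))
                         for c in centers])

    wl = np.linspace(2.0, 2.5, 2001)  # high-resolution wavelength grid (micrometers)
    refl = 1 - 0.4 * np.exp(-(wl - 2.2) ** 2 / (2 * 0.01 ** 2))  # narrow feature at 2.2

    # Band pass set equal to the sampling interval, as for a critically sampled sensor.
    for interval in (0.01, 0.05):
        centers = np.arange(2.0, 2.5, interval)
        band = resample_spectrum(wl, refl, centers, fwhm=interval)
        print(f"sampling {interval * 1000:.0f} nm: deepest channel = {band.min():.3f}")
    ```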

  4. Assessment of noise in non-tectonic displacement derived from GRACE time-variable gravity field

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Shen, Yunzhong

    2017-04-01

    Many studies have focused on estimating the noise in GNSS monitoring time series, but the noise of GNSS time series should be re-estimated after correction for non-tectonic displacement. Knowing the noise in the non-tectonic displacement can help to better identify the sources of the re-estimated noise; however, there is a lack of knowledge of the noise in the non-tectonic displacement. The objective of this work is to assess that noise. GRACE time-variable gravity is used to reflect the global mass variation. The GRACE Stokes coefficients of the gravity field are used to calculate the non-tectonic surface displacement at any point on the surface. The atmosphere and ocean AOD1B de-aliasing model is added back to the GRACE solutions because the complete mass variation is required. The monthly GRACE solutions from CSR, JPL, GFZ, and Tongji spanning January 2003 to September 2015 are compared. The degree-1 coefficients derived by Swenson et al (2008) are added, and the C20 terms are replaced with those obtained from Satellite Laser Ranging. P4M6 decorrelation and a Fan filter with a radius of 300 km are adopted to reduce the stripe errors. Optimal noise models for the 1054 stations in ITRF2014 are presented. It is found that white noise takes up only a small proportion: less than 18% in the horizontal and less than 13% in the vertical. The dominant models in the up and north components are ARMA and flicker noise, while in the east component power-law noise is significant. The local distributions of the optimal noise models are quite similar among the different products, showing little dependence on the processing strategies adopted. In addition, the reasons for the different distributions of the optimal noise models are investigated. Meanwhile, different filtering methods, such as Gaussian and Han filters, are applied to test whether the noise depends on the filter. Keywords: optimal noise model; non-tectonic displacement; GRACE; local distribution; filters

  5. Hydraulic Tomography and the Curse of Storativity

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Li, W.; Englert, A.

    2006-12-01

    Pumping tests are among the most common techniques for hydrogeological site investigation. Their traditional analysis is based on fitting analytical expressions to measured time series of drawdown. These expressions were derived for homogeneous conditions, whereas all natural aquifers are heterogeneous. The mentioned conceptual inconsistency complicates the hydrogeological interpretation of the obtained coefficients. In particular, it has been shown that the heterogeneity of transmissivity is aliased to variability in the estimated storativity. In hydraulic tomography, multiple pumping tests are jointly analyzed. The hydraulic parameters to be estimated are allowed to fluctuate in space. For regularization, a geostatistical smoothness criterion may be introduced. Thus, the inversion results in the most likely spatial distribution of parameters that is consistent with the drawdown measurements and follows a predefined geostatistical model. Applying the restricted maximum likelihood approach, the parameters of the prior covariance function (i.e., the prior variance and correlation length) can be inferred from the data as well. We have applied the quasi-linear geostatistical approach of inverse modeling to drawdown measurements of multiple, overlapping pumping tests performed at the test site Krauthausen near Jülich, Germany. To reduce the computational costs, we have characterized the drawdown curves by their temporal moments. In the estimation of the geostatistical parameters, the measurement error of heads turned out to be of vital importance. The less we trust the data, the larger is the estimated correlation length, resulting in a more uniform distribution of transmissivity. Similar to conventional pumping test analysis, the data analysis points to a high variability of storativity although the properties making up storativity are known to be only mildly heterogeneous. We conjecture that the unresolved small-scale spatial variability of conductivity is mapped to variability of storativity. This is rather unfortunate since reliable field data on the variability of storativity are missing. The study underscores that structural information is difficult to extract from hydraulic data alone. Information on length scales and major deterministic features may be gained by geophysical surveying, even if rock-laws directly relating geophysical to hydraulic properties are considered unreliable.

  6. Quantifying the Climate-Scale Accuracy of Satellite Cloud Retrievals

    NASA Astrophysics Data System (ADS)

    Roberts, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Liang, L.; Di Girolamo, L.

    2014-12-01

    Instrument calibration and cloud retrieval algorithms have been developed to minimize retrieval errors on small scales. However, measurement uncertainties and assumptions within retrieval algorithms at the pixel level may alias into decadal-scale trends of cloud properties. We first, therefore, quantify how instrument calibration changes could alias into cloud property trends. For a perfect observing system the climate trend accuracy is limited only by the natural variability of the climate variable. Alternatively, for an actual observing system, the climate trend accuracy is additionally limited by the measurement uncertainty. Drifts in calibration over time may therefore be disguised as a true climate trend. We impose absolute calibration changes to MODIS spectral reflectance used as input to the CERES Cloud Property Retrieval System (CPRS) and run the modified MODIS reflectance through the CPRS to determine the sensitivity of cloud properties to calibration changes. We then use these changes to determine the impact of instrument calibration changes on trend uncertainty in reflected solar cloud properties. Secondly, we quantify how much cloud retrieval algorithm assumptions alias into cloud optical retrieval trends by starting with the largest of these biases: the plane-parallel assumption in cloud optical thickness (τC) retrievals. First, we collect liquid water cloud fields obtained from Multi-angle Imaging Spectroradiometer (MISR) measurements to construct realistic probability distribution functions (PDFs) of 3D cloud anisotropy (a measure of the degree to which clouds depart from plane-parallel) for different ISCCP cloud types. Next, we will conduct a theoretical study with dynamically simulated cloud fields and a 3D radiative transfer model to determine the relationship between 3D cloud anisotropy and 3D τC bias for each cloud type. Combining these results provides distributions of 3D τC bias by cloud type. Finally, we will estimate the change in frequency of occurrence of cloud types between two decades and will have the information needed to calculate the total change in 3D optical thickness bias between two decades. If we uncover aliases in this study, the results will motivate the development and rigorous testing of climate specific cloud retrieval algorithms.
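
    The aliasing of a calibration drift into a decadal trend can be demonstrated in a few lines; the drift magnitude and the white-noise cloud record below are hypothetical stand-ins, not CERES/MODIS values:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)                            # two decades, monthly
tau_true = 10.0 + 0.3 * rng.standard_normal(240)   # cloud property, no true trend

drift = 0.01 * (months / 120.0) * 10.0             # +1% of signal per decade
tau_meas = tau_true + drift                        # what the instrument reports

per_decade = 120
print("true trend    %+.3f / decade" % (np.polyfit(months, tau_true, 1)[0] * per_decade))
print("aliased trend %+.3f / decade" % (np.polyfit(months, tau_meas, 1)[0] * per_decade))
```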

  7. Truncation of Spherical Harmonic Series and its Influence on Gravity Field Modelling

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Gruber, T.; Rummel, R.

    2009-04-01

    Least-squares adjustment is a very common and effective tool for the calculation of global gravity field models in terms of spherical harmonic series. However, since the gravity field is a continuous field function, its optimal representation by a finite series of spherical harmonics is connected with a set of fundamental problems. Particularly worth mentioning here are cut-off errors and aliasing effects. These problems stem from the truncation of the spherical harmonic series and from the fact that the spherical harmonic coefficients cannot be determined independently of each other within the adjustment process in the case of discrete observations. The latter is shown by the non-diagonal variance-covariance matrices of gravity field solutions. Sneeuw described in 1994 that the off-diagonal matrix elements - at least if data are equally weighted - are the result of a loss of orthogonality of Legendre polynomials on regular grids. The poster addresses questions arising from the truncation of spherical harmonic series in spherical harmonic analysis and synthesis. Such questions are: (1) How does the high-frequency data content (outside the parameter space) affect the estimated spherical harmonic coefficients? (2) Where should the spherical harmonic series be truncated in the adjustment process in order to avoid high-frequency leakage? (3) Given a set of spherical harmonic coefficients resulting from an adjustment, what is the effect of using only a truncated version of it?
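
    The loss of orthogonality that Sneeuw pointed out is easy to reproduce numerically; the grid size, degree range, and normalization below are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N, lmax = 40, 30
theta = (np.arange(N) + 0.5) * np.pi / N     # regular grid in colatitude
x = np.cos(theta)

# Normalized Legendre polynomials sampled on the regular grid
P = np.array([np.sqrt((2 * l + 1) / 2.0) * leg.legval(x, [0] * l + [1])
              for l in range(lmax + 1)])

# Discrete Gram matrix; exact orthogonality would give the identity matrix
w = np.sin(theta) * (np.pi / N)              # midpoint quadrature weights
G = (P * w) @ P.T
print("max deviation from identity: %.3e" % np.abs(G - np.eye(lmax + 1)).max())
```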

  8. Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.

    PubMed

    Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-09-18

    High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm²) and achieve half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.

  9. High-accuracy 3D Fourier forward modeling of gravity field based on the Gauss-FFT technique

    NASA Astrophysics Data System (ADS)

    Zhao, Guangdong; Chen, Bo; Chen, Longwei; Liu, Jianxin; Ren, Zhengyong

    2018-03-01

    The 3D Fourier forward modeling of 3D density sources is capable of providing 3D gravity anomalies consistent with the meshed density distribution within the whole source region. This paper first derives a set of analytical expressions, employing 3D Fourier transforms, for calculating the gravity anomalies of a 3D density source approximated by right rectangular prisms. To reduce the errors due to aliasing, imposed periodicity, and edge effects in Fourier-domain modeling, we apply the 3D Gauss-FFT technique to the forward modeling of 3D gravity anomalies. The capability and adaptability of this scheme are tested on simple synthetic models. The results show that the accuracy of the Fourier forward methods using the Gauss-FFT with 4 Gaussian nodes (or more) is comparable to that of spatial-domain modeling. In addition, the "ghost" source effects in the 3D Fourier forward gravity field due to the imposed periodicity of the standard FFT algorithm are markedly suppressed by the application of the 3D Gauss-FFT algorithm. More importantly, the execution time of the 4-node Gauss-FFT modeling is reduced by two orders of magnitude compared with the spatial-domain forward method. This demonstrates that the improved Fourier method is an efficient and accurate forward modeling tool for the gravity field.
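
    The "ghost" source effect of imposed periodicity is a generic property of FFT convolution; the 1D sketch below shows the artifact and zero padding as the simplest remedy. It illustrates the problem the Gauss-FFT also addresses, not the Gauss-FFT algorithm itself, and the source and kernel are arbitrary stand-ins:

```python
import numpy as np

n = 256
x = np.arange(n)
src = np.zeros(n); src[120:136] = 1.0          # compact "density" source
ker = 1.0 / (1.0 + (x - n // 2) ** 2)          # slowly decaying kernel

def fft_conv(a, b, pad):
    m = len(a) + pad
    return np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)

circ = fft_conv(src, ker, pad=0)   # imposed periodicity: response wraps around
lin = fft_conv(src, ker, pad=n)    # zero padding suppresses the wrap-around

# Near the left edge the circular result contains wrapped "ghost" signal,
# while the padded (linear) result is nearly zero there.
print(np.abs(circ[:32]).max(), np.abs(lin[:32]).max())
```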

  10. Two-dimensional energy spectra in a high Reynolds number turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Chandran, Dileep; Baidya, Rio; Monty, Jason; Marusic, Ivan

    2016-11-01

    The current study measures the two-dimensional (2D) spectra of the streamwise velocity component (u) in a high Reynolds number turbulent boundary layer for the first time. A 2D spectrum shows the contribution of streamwise (λx) and spanwise (λy) length scales to the streamwise variance at a given wall height (z). 2D spectra could be a better tool for analysing spectral scaling laws as they are devoid of the energy aliasing errors that can be present in one-dimensional spectra. A novel method is used to calculate the 2D spectra from the 2D correlation of u, which is obtained by measuring velocity time series at various spanwise locations using hot-wire anemometry. At low Reynolds number, the shape of the 2D spectra at a constant energy level shows λy ∝ √(zλx) behaviour at larger scales, which is in agreement with the literature. However, at high Reynolds number, it is observed that the square-root relationship gradually transforms into a linear relationship (λy ∝ λx), which could be caused by large packets of eddies whose length grows in proportion to the growth of their width. Additionally, we will show that this linear relationship observed at high Reynolds number is consistent with attached-eddy predictions. The authors gratefully acknowledge the support from the Australian Research Council.
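
    In outline, the computation pairs the 2D two-point correlation with a 2D Fourier transform (Wiener-Khinchin); the random array below merely stands in for the multi-probe hot-wire data:

```python
import numpy as np

rng = np.random.default_rng(1)
nt, ny = 4096, 32
u = rng.standard_normal((nt, ny))   # u(t, y): ny spanwise probes sampled in time
u -= u.mean(axis=0)                 # fluctuations about the mean

# 2D power spectrum over (frequency, spanwise wavenumber); equivalently the
# Fourier transform of the 2D two-point correlation of u.
phi = np.abs(np.fft.fft2(u)) ** 2 / (nt * ny)

# Taylor's frozen-turbulence hypothesis, lambda_x = U_c / f, then maps the
# frequency axis to streamwise wavelength.
```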

  11. Using HFMEA to assess potential for patient harm from tubing misconnections.

    PubMed

    Kimchi-Woods, Judy; Shultz, John P

    2006-07-01

    Reported cases of tubing misconnections and other tubing errors prompted Columbus Children's Hospital to study their potential for harm in its patient population. A Healthcare Failure Mode and Effects Analysis (HFMEA) was conducted in October 2004 to determine the risks inherent in the use and labeling of various enteral, parenteral, and other tubing types in patient care and the potential for patient harm. An assessment of the practice culture revealed considerable variability among nurses and respiratory therapists within and between units. Work on the HFMEA culminated in recommendations of risk reduction strategies. These included standardizing the process of labeling tubing throughout the organization, developing an online pictorial catalog listing available tubing supplies with all aliases used by staff, and conducting an inventory of all supplies to identify products that need to be purchased or discontinued. Three groups are working on implementing each of the recommendations. Most of the results realized so far have been in the labeling of tubing. The pediatric intensive care unit labels all tubing with infused medications 85% of the time; tubing inserted during surgery or in interventional radiology is labeled 53% and 93% of the time, respectively. Pocket-size cards with printed labels were tested in three units. This proactive risk assessment project has identified failure modes and possible causes and solutions; several recommendations have been implemented. No tubing misconnections have been reported.

  12. Systematic study of error sources in supersonic skin-friction balance measurements

    NASA Technical Reports Server (NTRS)

    Allen, J. M.

    1976-01-01

    An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.

  13. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  14. Low Noise Infrasonic Sensor System with High Reduction of Natural Background Noise

    DTIC Science & Technology

    2006-05-01

    local processing allows a variety of options both in the array geometry and signal processing. A generic geometry is indicated in Figure 2. Geometric...higher frequency sound detected. Table 1 provides a comparison of piezocable and microbarograph based arrays. Piezocable Sensor Local Signal...aliasing associated with the current infrasound sensors used at large spacing in the present designs of infrasound monitoring arrays, particularly in the

  15. Finite Element Analysis of Lamb Waves Acting within a Thin Aluminum Plate

    DTIC Science & Technology

    2007-09-01

    signal to avoid time aliasing % LambWaveMode % lamb wave mode to simulate; use proper phase velocity curve % thickness % thickness of...analysis of the simulated signal response data demonstrated that elevated temperatures delay wave propagation, although the delays are minimal at the...Echo Techniques Ultrasonic NDE techniques are based on the propagation and reflection of elastic waves, with the assumption that damage in the

  16. Exploring the Acoustic Nonlinearity for Monitoring Complex Aerospace Structures

    DTIC Science & Technology

    2008-02-27

    nonlinear elastic waves, embedded ultrasonics, nonlinear diagnostics, aerospace structures, structural joints...sampling, 100 MHz bandwidth with noise and anti-aliasing filters, general-purpose alias-protected decimation for all sample rates and quad digital down-conversion (DDC) with up to 40 MHz IF bandwidth. Specified resolution of NI PXI 5142 is 14-bits with the noise floor approaching -85 dB. Such a

  17. An Evaluation of the TRIPS Computer System (Extended Technical Report)

    DTIC Science & Technology

    2008-07-08

    Mario Marino, Nitya Ranganathan, Behnam Robatmili, Aaron Smith, James Burrill, Stephen W. Keckler, Doug Burger, Kathryn S. McKinley; ASPLOS 2009, Washington DC...aggressively register allocate more memory accesses by using programmer knowledge about pointer aliasing, much of which may be automated. They also

  18. Context dependent anti-aliasing image reconstruction

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.; Hunt, A.; Arlia, N.

    1989-01-01

    Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.

  19. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  20. POCS-based reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE): a general algorithm for reducing motion-related artifacts

    PubMed Central

    Chu, Mei-Lan; Chang, Hing-Chiu; Chung, Hsiao-Wen; Truong, Trong-Kha; Bashir, Mustafa R.; Chen, Nan-kuei

    2014-01-01

    Purpose A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion weighted imaging (DWI). Theory Images with reduced artifacts are reconstructed with an iterative POCS procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. Methods The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved DWI data corresponding to different k-space trajectories and matrix condition numbers. Results Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. Conclusion POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods. PMID:25394325

  1. A technology review of time-of-flight photon counting for advanced remote sensing

    NASA Astrophysics Data System (ADS)

    Lamb, Robert A.

    2010-04-01

    Time correlated single photon counting (TCSPC) has made tremendous progress during the past ten years, enabling improved performance in precision time-of-flight (TOF) rangefinding and lidar. In this review the development and performance of several ranging systems is presented that use TCSPC for accurate ranging and range profiling over distances up to 17 km. A range resolution of a few millimetres is routinely achieved over distances of several kilometres. These systems include single-wavelength devices operating in the visible; multi-wavelength systems covering the visible and near infra-red; the use of electronic gating to reduce in-band solar background; and, most recently, operation at high repetition rates without range aliasing, typically 10 MHz over several kilometres. These systems operate at very low optical power (<100 μW). The technique therefore has potential for eye-safe lidar monitoring of the environment and obvious military, security and surveillance sensing applications. The review will highlight the theoretical principles of photon counting and progress made in developing absolute ranging techniques that enable high repetition rate data acquisition while avoiding range aliasing. Technology trends in TCSPC rangefinding are merging with those of quantum cryptography, and its future application to revolutionary quantum imaging provides diverse and exciting research into secure covert sensing, ultra-low power active imaging and quantum rangefinding.
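
    The range-aliasing constraint quoted above follows from the pulse period alone; a short check makes the trade-off concrete (the speed of light is the only physical constant used, and the repetition rates are examples):

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(rep_rate_hz):
    """Beyond c / (2 * f_rep), a return is aliased into a later pulse period."""
    return C / (2.0 * rep_rate_hz)

print(unambiguous_range(10e6))   # 10 MHz -> ~15 m: aliased over km-scale paths
print(unambiguous_range(10e3))   # 10 kHz -> ~15 km: unambiguous, but slow
```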

  2. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) The signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depends upon the reconstruction filter(s) used. The selection depends upon the application.
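
    A minimal sketch of the ideal low-pass reconstruction filter (Whittaker-Shannon interpolation) for a band-limited, adequately sampled tone; names and numbers are illustrative:

```python
import numpy as np

def sinc_reconstruct(t_samples, x_samples, fs, t_eval):
    """x(t) = sum_n x[n] * sinc(fs * (t - n/fs)): ideal low-pass reconstruction."""
    return np.array([np.sum(x_samples * np.sinc(fs * (te - t_samples)))
                     for te in t_eval])

fs = 10.0                              # sample rate in Hz; Nyquist is 5 Hz
t = np.arange(0.0, 2.0, 1.0 / fs)      # sample instants
x = np.sin(2 * np.pi * 3.0 * t)        # 3 Hz tone, within the Nyquist limit
t_fine = np.linspace(0.2, 1.8, 1000)   # stay away from edges of the record
x_rec = sinc_reconstruct(t, x, fs, t_fine)

# Residual comes only from truncating the infinite interpolation sum.
print(np.max(np.abs(x_rec - np.sin(2 * np.pi * 3.0 * t_fine))))
```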

  3. A new multiscale noise tuning stochastic resonance for enhanced fault diagnosis in wind turbine drivetrains

    NASA Astrophysics Data System (ADS)

    Hu, Bingbing; Li, Bing

    2016-02-01

    It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method, originally based on the discrete wavelet transform (DWT), has disadvantages such as shift variance and aliasing effects in engineering applications. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis through the merits of the DTCWT (near shift invariance and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching a single wavelet basis function to the signal being analyzed, which may speed up the signal processing and allow on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer-ring and shaft-coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
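
    For readers unfamiliar with stochastic resonance itself, the sketch below integrates the classical single-scale bistable SR system with Euler-Maruyama stepping; it is a generic illustration, not the paper's DTCWT-based multiscale noise tuning, and all parameters are arbitrary:

```python
import numpy as np

def bistable_sr(signal, dt, a=1.0, b=1.0, noise=0.5, seed=0):
    """dx/dt = a*x - b*x**3 + s(t) + noise*w(t): overdamped bistable SR system."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(signal))
    for i in range(1, len(signal)):
        drift = a * x[i - 1] - b * x[i - 1] ** 3 + signal[i - 1]
        x[i] = x[i - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
weak_fault = 0.1 * np.sin(2 * np.pi * 0.5 * t)   # weak periodic fault signature
x = bistable_sr(weak_fault, dt)                  # noise-assisted amplification
```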

  4. [Object Separation from Medical X-Ray Images Based on ICA].

    PubMed

    Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun

    2015-03-01

    X-ray medical images can reveal diseased tissue in patients and have important reference value for medical diagnosis. To address the problems of noise, a poor sense of layering, and aliased, overlapping organs in traditional X-ray images, this paper proposes a method that introduces multi-spectrum X-ray imaging and an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the proportions of the main organs in the images, the aliased thickness matrix of each pixel is isolated. Finally, independent component analysis obtains the convergence matrix to reconstruct the target object using blind separation theory. In the ICA algorithm, it was found that when the number of iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the amplitudes of the scale lie in the [25, 45] interval, the target images have high contrast and little distortion. A three-dimensional plot of peak signal-to-noise ratio (PSNR) shows that the different convergence counts and amplitudes have a large influence on image quality. The contrast and edge information of the experimental images achieve the best results with 85 convergence iterations and an amplitude of 35 in the ICA algorithm.

  5. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
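
    The spatial aliasing frequency that separates the two reproduction regimes follows from the element spacing of the array; a short check, with the spacing chosen purely for illustration:

```python
C_SOUND = 343.0  # speed of sound in air, m/s, at room temperature

def spatial_alias_freq(spacing_m):
    """Above roughly c / (2 * d), a discrete array undersamples the sound field."""
    return C_SOUND / (2.0 * spacing_m)

print(spatial_alias_freq(0.10))   # 10 cm element spacing -> ~1.7 kHz
```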

  6. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  7. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
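
    As background to the PF-versus-KF comparison, here is a textbook bootstrap particle filter (propagate, reweight, resample) on a toy nonlinear scalar system; it is deliberately generic and does not implement the paper's NNEM, and the toy dynamics are invented for the demo:

```python
import numpy as np

def particle_filter(zs, f, h, q, r, n=1000, seed=0):
    """Bootstrap particle filter for x' = f(x) + q*w, z = h(x) + r*v."""
    rng = np.random.default_rng(seed)
    parts = rng.standard_normal(n)                  # initial particle cloud
    est = []
    for z in zs:
        parts = f(parts) + q * rng.standard_normal(n)
        w = np.exp(-0.5 * ((z - h(parts)) / r) ** 2) + 1e-300
        w /= w.sum()
        est.append(np.sum(w * parts))               # weighted posterior mean
        parts = parts[rng.choice(n, size=n, p=w)]   # multinomial resampling
    return np.array(est)

rng = np.random.default_rng(1)
xs = [0.0]
for _ in range(99):
    xs.append(0.9 * xs[-1] + 0.2 * np.sin(xs[-1]) + 0.1 * rng.standard_normal())
xs = np.asarray(xs)
zs = xs + 0.05 * xs**3 + 0.2 * rng.standard_normal(100)   # nonlinear measurement
est = particle_filter(zs, f=lambda x: 0.9 * x + 0.2 * np.sin(x),
                      h=lambda x: x + 0.05 * x**3, q=0.1, r=0.2)
```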

  8. Ion beam machining error control and correction for small scale optics.

    PubMed

    Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi

    2011-09-20

    Ion beam figuring (IBF) technology for small-scale optical components is discussed. Since a small removal function can be obtained in IBF, computer-controlled optical surfacing technology becomes capable of machining precision centimeter- or millimeter-scale optical components deterministically. When using a small ion beam to machine small optical components, some key problems, such as positioning the small ion beam on the optical surface, determining the material removal rate, and controlling the ion beam scanning pitch on the optical surface, must be seriously considered. The main reason is that a small beam is more sensitive to these problems than a big ion beam because of its small beam diameter and lower material removal rate. In this paper, we discuss these problems and their influence on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is derived for correcting the positioning error of the ion beam, with the material removal rate estimated using a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples were performed, and the final surface errors are both smaller than λ/100 as measured by a Zygo GPI interferometer.

  9. Comparison of MLC error sensitivity of various commercial devices for VMAT pre-treatment quality assurance.

    PubMed

    Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi

    2018-05-01

    The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1, 2%/2, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) were used between the baseline and error-induced measurements. Some deviations of the MLC error sensitivity for the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices had a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error for some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
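
    For reference, the gamma comparison used above reduces to a few lines in 1D; the profiles are synthetic and the 1 mm shift stands in for a systematic leaf-position error:

```python
import numpy as np

def gamma_1d(x_mm, dose_ref, dose_eval, dd=0.03, dta_mm=3.0):
    """Global 1D gamma index: minimize the combined dose-difference /
    distance-to-agreement metric over the evaluated profile for each point."""
    norm = dose_ref.max()
    g = np.empty(len(x_mm))
    for i, (xi, di) in enumerate(zip(x_mm, dose_ref)):
        dose_term = ((dose_eval - di) / (dd * norm)) ** 2
        dist_term = ((x_mm - xi) / dta_mm) ** 2
        g[i] = np.sqrt(np.min(dose_term + dist_term))
    return g

x = np.linspace(-30.0, 30.0, 241)        # position, mm
ref = np.exp(-x**2 / 200.0)              # reference dose profile
err = np.exp(-(x - 1.0)**2 / 200.0)      # profile with a 1 mm systematic shift

# The small error still passes comfortably at 3%/3 mm, echoing the paper's
# caution about loose gamma criteria for small MLC errors.
print("3%%/3 mm pass rate: %.1f%%" % (100 * np.mean(gamma_1d(x, ref, err) <= 1.0)))
```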

  10. Rotation Rate of Saturn's Magnetosphere using CAPS Plasma Measurements

    NASA Technical Reports Server (NTRS)

    Sittler, E.; Cooper, J.; Hartle, R.; Simpson, D.; Johnson, R.; Thomsen, M.; Arridge, C.

    2011-01-01

    We present the current status of an investigation of the rotation rate of Saturn's magnetosphere using a 3D velocity moment technique being developed at Goddard, which is similar to the 2D version used by Sittler et al. for SOI and to that used by Thomsen et al. This technique allows one to nearly cover the full energy range of the Cassini Plasma Spectrometer (CAPS) IMS from 1 V ≤ E/Q < 50 kV. Since our technique maps the observations into a local inertial frame, it works during roll maneuvers. We make comparisons with the bi-Maxwellian fitting technique developed by Wilson et al. and the similar velocity moment technique by Thomsen et al. We concentrate our analysis on periods when ion composition data are available, which are used to weight the non-compositional data, referred to as singles data, to separate H+, H2+ and water group ions (W+) from each other. The chosen periods have high enough telemetry rates (4 kbps or higher) so that coincidence ion data, similar to those used by Sittler et al. for SOI, are available. The ion data set is especially valuable for measuring flow velocities for protons, which are more difficult to derive using singles data within the inner magnetosphere, where the signal is dominated by heavy ions (i.e., the proton peak merges with the W+ peak as a low-energy shoulder). Our technique uses a flux function, which is zero in the proper plasma flow frame, to estimate fluid parameter uncertainties. The comparisons investigate the experimental errors and potential for systematic errors in the analyses, including ours. The rolls provide the best data set when it comes to getting 4π coverage of the plasma but are more susceptible to time aliasing effects. In the future we will make comparisons with magnetic field observations, Saturn ionosphere conductivities as presently known, and the field-aligned currents necessary for the planet to enforce corotation of the rotating plasma.

  11. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  12. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.
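
    The kind of coverage-and-bias audit the study advocates is straightforward to script; this sketch checks Fisher-z intervals for a plain correlation at small n, a deliberately simpler stand-in for ordinal CFA factor correlations:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.5, 30, 5000
hits = above = 0
for _ in range(reps):
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    r = np.corrcoef(x.T)[0, 1]
    z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)       # Fisher z and its SE
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    hits += lo <= rho <= hi
    above += lo > rho                  # interval lies entirely above the truth
print("coverage %.3f, positively biased misses %.3f" % (hits / reps, above / reps))
```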

  13. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    The implication of the compactness of the Hessian is that for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately...probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this

  14. Visualization of 3D CT-based anatomical models

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

    Biomedical volumetric data visualization techniques for exploration purposes are well developed. Most of the known methods are inappropriate for surgery simulation systems due to a lack of realism. Segmented data visualization is a well-known approach for visualizing structured volumetric data. This research focuses on improving the segmented data visualization technique by resolving aliasing problems and by using material transparency modeling for better rendering of semitransparent structures.

  15. Spatial Computation

    DTIC Science & Technology

    2003-12-01

    POPL), pages 146–157, 1988. [HT01] Nevin Heintze and Olivier Tardieu. Ultra-fast aliasing analysis using CLA: A million lines of C code in a second...

  16. Range safety signal propagation through the SRM exhaust plume of the space shuttle

    NASA Technical Reports Server (NTRS)

    Boynton, F. P.; Davies, A. R.; Rajasekhar, P. S.; Thompson, J. A.

    1977-01-01

    Theoretical predictions of interference with the space shuttle range safety system by solid rocket booster exhaust plumes are reported. The signal propagation was calculated using a split-operator technique based upon the Fresnel-Kirchhoff integral, using fast Fourier transforms to evaluate the convolution and treating the plume as a series of absorbing and phase-changing screens. Talanov's lens transformation was applied to reduce aliasing problems caused by ray divergence.
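
    In outline, the split-operator scheme alternates free-space propagation with thin screens; the 1D angular-spectrum sketch below conveys that structure only (the cited work used the Fresnel-Kirchhoff integral and Talanov's lens transformation, and the geometry and plume numbers here are hypothetical):

```python
import numpy as np

def split_step(u, dx, dz, wavelength, screen_amp, screen_phase):
    """Propagate field u over dz via the angular spectrum, then apply a thin
    absorbing and phase-changing screen representing one plume slab."""
    n = len(u)
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, dx)
    kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))   # evanescent components dropped
    u = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))
    return u * screen_amp * np.exp(1j * screen_phase)

n, dx = 1024, 0.05                                # ~51 m transverse window
xg = (np.arange(n) - n / 2) * dx
u = np.exp(-(xg / 2.0) ** 2).astype(complex)      # initial beam profile
amp = np.ones(n);  amp[480:544] = 0.7             # absorbing plume core
phs = np.zeros(n); phs[480:544] = 0.3             # phase screen (radians)
for _ in range(10):                               # ten slabs of 50 m each
    u = split_step(u, dx, dz=50.0, wavelength=0.7,
                   screen_amp=amp, screen_phase=phs)
```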

  17. Morphological demosaicking

    NASA Astrophysics Data System (ADS)

    Quan, Shuxue

    2009-02-01

    Bayer patterns, in which a single value of red, green or blue is available for each pixel, are widely used in digital color cameras. The reconstruction of the full color image is often referred to as demosaicking. This paper introduces a new approach, morphological demosaicking. The approach is based on strong edge directionality selection and interpolation, followed by morphological operations to refine edge directionality selection and reduce color aliasing. Finally, a performance evaluation and examples of color artifact reduction are shown.

  18. Ocean Surface Wave Optical Roughness: Analysis of Innovative Measurements

    DTIC Science & Technology

    2013-12-16

    relationship of MSS to wind speed, and at times has shown a reversal of the Cox-Munk linear relationship. Furthermore, we observe measurable changes in...1985]. The variable speed allocation method has the effect of aliasing (cb) to slower waves, thereby increasing the exponent -m. Our analysis based...(RaDyO) program. The primary research goals of the program are to (1) examine time-dependent oceanic radiance distribution in relation to dynamic

  19. Perceptual Performance Impact of GPU-Based WARP and Anti-Aliasing for Image Generators

    DTIC Science & Technology

    2016-06-29

    In 2012 the U.S. Air Force School of Aerospace Medicine, in partnership with the Air Force Research Laboratory (AFRL) and NASA AMES, constructed the Operational Based Vision Assessment (OBVA) simulator to evaluate the...This 15-channel, 150...

  20. Sampling and position effects in the Electronically Steered Thinned Array Radiometer (ESTAR)

    NASA Technical Reports Server (NTRS)

    Katzberg, Stephen J.

    1993-01-01

    A simple engineering level model of the Electronically Steered Thinned Array Radiometer (ESTAR) is developed that allows an identification of the major effects of the sampling process involved with this technique. It is shown that the ESTAR approach is sensitive to aliasing and has a highly non-uniform sensitivity profile. It is further shown that the ESTAR approach is strongly sensitive to position displacements of the low-density sampling antenna elements.

  1. Mathematical and Numerical Analysis in Support of Scientific Research.

    DTIC Science & Technology

    1980-06-30

    Technical Information Service...problem of aliasing may occur in which the sampling rate is low enough to confuse two or more frequencies in the data. The net result is that they appear...variance provides a measure of the quality of the estimate. Should Al be large, one must consider obtaining R r by employing the FFT approach (Faster and

  2. Response approach to the squeezed-limit bispectrum: application to the correlation of quasar and Lyman-α forest power spectrum

    DOE PAGES

    Chiang, Chi-Ting; Cieplak, Agnieszka M.; Schmidt, Fabian; ...

    2017-06-12

    The squeezed-limit bispectrum, which is generated by nonlinear gravitational evolution as well as inflationary physics, measures the correlation of three wavenumbers in the configuration where one wavenumber is much smaller than the other two. Since the squeezed-limit bispectrum encodes the impact of a large-scale fluctuation on the small-scale power spectrum, it can be understood as how the small-scale power spectrum "responds" to the large-scale fluctuation. Viewed in this way, the squeezed-limit bispectrum can be calculated using the response approach even in the cases which do not submit to perturbative treatment. To illustrate this point, we apply this approach to the cross-correlation between the large-scale quasar density field and the small-scale Lyman-α forest flux power spectrum. In particular, using separate universe simulations which implement changes in the large-scale density, velocity gradient, and primordial power spectrum amplitude, we measure how the Lyman-α forest flux power spectrum responds to the local, long-wavelength quasar overdensity, and equivalently their squeezed-limit bispectrum. We perform a Fisher forecast for the ability of future experiments to constrain local non-Gaussianity using the bispectrum of quasars and the Lyman-α forest. Combining with quasar and Lyman-α forest power spectra to constrain the biases, we find that for DESI the expected 1-σ constraint is err[f_NL] ~ 60. The ability of DESI to measure f_NL through this channel is limited primarily by the aliasing and instrumental noise of the Lyman-α forest flux power spectrum. Lastly, the combination of the response approach and separate universe simulations provides a novel technique to explore the constraints from the squeezed-limit bispectrum between different observables.

  3. Response approach to the squeezed-limit bispectrum: application to the correlation of quasar and Lyman-α forest power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Chi-Ting; Cieplak, Agnieszka M.; Schmidt, Fabian

    The squeezed-limit bispectrum, which is generated by nonlinear gravitational evolution as well as inflationary physics, measures the correlation of three wavenumbers in the configuration where one wavenumber is much smaller than the other two. Since the squeezed-limit bispectrum encodes the impact of a large-scale fluctuation on the small-scale power spectrum, it can be understood as how the small-scale power spectrum "responds" to the large-scale fluctuation. Viewed in this way, the squeezed-limit bispectrum can be calculated using the response approach even in the cases which do not submit to perturbative treatment. To illustrate this point, we apply this approach to the cross-correlation between the large-scale quasar density field and the small-scale Lyman-α forest flux power spectrum. In particular, using separate universe simulations which implement changes in the large-scale density, velocity gradient, and primordial power spectrum amplitude, we measure how the Lyman-α forest flux power spectrum responds to the local, long-wavelength quasar overdensity, and equivalently their squeezed-limit bispectrum. We perform a Fisher forecast for the ability of future experiments to constrain local non-Gaussianity using the bispectrum of quasars and the Lyman-α forest. Combining with quasar and Lyman-α forest power spectra to constrain the biases, we find that for DESI the expected 1-σ constraint is err[f_NL] ~ 60. The ability of DESI to measure f_NL through this channel is limited primarily by the aliasing and instrumental noise of the Lyman-α forest flux power spectrum. Lastly, the combination of the response approach and separate universe simulations provides a novel technique to explore the constraints from the squeezed-limit bispectrum between different observables.

  4. Response approach to the squeezed-limit bispectrum: application to the correlation of quasar and Lyman-α forest power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Chi-Ting; Cieplak, Agnieszka M.; Slosar, Anže

    The squeezed-limit bispectrum, which is generated by nonlinear gravitational evolution as well as inflationary physics, measures the correlation of three wavenumbers, in the configuration where one wavenumber is much smaller than the other two. Since the squeezed-limit bispectrum encodes the impact of a large-scale fluctuation on the small-scale power spectrum, it can be understood as how the small-scale power spectrum "responds" to the large-scale fluctuation. Viewed in this way, the squeezed-limit bispectrum can be calculated using the response approach even in the cases which do not submit to perturbative treatment. To illustrate this point, we apply this approach to the cross-correlation between the large-scale quasar density field and the small-scale Lyman-α forest flux power spectrum. In particular, using separate universe simulations which implement changes in the large-scale density, velocity gradient, and primordial power spectrum amplitude, we measure how the Lyman-α forest flux power spectrum responds to the local, long-wavelength quasar overdensity, and equivalently their squeezed-limit bispectrum. We perform a Fisher forecast for the ability of future experiments to constrain local non-Gaussianity using the bispectrum of quasars and the Lyman-α forest. Combining with quasar and Lyman-α forest power spectra to constrain the biases, we find that for DESI the expected 1-σ constraint is err[f_NL] ∼ 60. The ability of DESI to measure f_NL through this channel is limited primarily by the aliasing and instrumental noise of the Lyman-α forest flux power spectrum. The combination of the response approach and separate universe simulations provides a novel technique to explore the constraints from the squeezed-limit bispectrum between different observables.
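
    The separate universe logic reduces, schematically, to a finite difference of small-scale power spectra across simulations with shifted background density; the response amplitude and spectra below are synthetic placeholders, not measured Lyman-α values:

```python
import numpy as np

k = np.logspace(-1, 1, 50)         # small-scale wavenumbers
P0 = k ** -1.5                     # fiducial power spectrum (arbitrary shape)
R_true, dL = 1.5, 0.05             # assumed response and long-mode overdensity

P_hi = P0 * (1 + R_true * dL)      # "separate universe" run with delta_L = +dL
P_lo = P0 * (1 - R_true * dL)      # and with delta_L = -dL

# Finite-difference estimate of dlnP/d(delta_L), the response that fixes the
# squeezed-limit bispectrum up to known bias factors.
R_est = (P_hi - P_lo) / (2 * dL * P0)
print(R_est[:3])                   # recovers ~1.5 at every k
```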

  5. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  6. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
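
    The largest Liapunov exponent referred to above can be estimated with the classic two-trajectory renormalization method; as a self-contained stand-in for the 28-variable quasi-geostrophic model, the sketch uses the 3-variable Lorenz-63 system:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(s, dt):
    # Fourth-order Runge-Kutta step
    k1 = lorenz(s); k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, d0, steps = 0.01, 1e-8, 20000
a = np.array([1.0, 1.0, 20.0])          # reference trajectory
b = a + np.array([d0, 0.0, 0.0])        # perturbed trajectory
lam = 0.0
for _ in range(steps):
    a, b = rk4(a, dt), rk4(b, dt)
    d = np.linalg.norm(b - a)
    lam += np.log(d / d0)
    b = a + (b - a) * (d0 / d)          # renormalize the error vector
print("largest Liapunov exponent ~ %.2f" % (lam / (steps * dt)))   # ~0.9
```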

  7. Design Method For Ultra-High Resolution Linear CCD Imagers

    NASA Astrophysics Data System (ADS)

    Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan

    1984-11-01

    This paper presents a design method for achieving ultra-high resolution linear imagers. The method utilizes advanced design rules and novel staggered bilinear photosensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. An imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency and fine photolithography requirements associated with this architecture are also discussed. A CCD imager with an advanced 1.5 μm minimum feature size was fabricated. It is intended as a test vehicle for the next generation small sampling pitch ultra-high resolution CCD imager. Standard double-poly, two-phase shift registers were fabricated at an 8 μm pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and a maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance. The dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4%, and responsivity 0.7 V/(μJ/cm²) for the 8 μm × 6 μm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation ultra-high resolution CCD imagers.

  8. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  9. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.

  10. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
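
    For comparison, the classical construction that breaks down here is the exact (Clopper-Pearson) binomial confidence interval; a minimal sketch using scipy (the paper's extended notion of a confidence interval is not reproduced here):

      from scipy import stats

      def clopper_pearson(k, n, conf=0.95):
          """Exact binomial confidence interval for k errors in n trials."""
          alpha = 1.0 - conf
          lo = stats.beta.ppf(alpha/2, k, n - k + 1) if k > 0 else 0.0
          hi = stats.beta.ppf(1 - alpha/2, k + 1, n - k) if k < n else 1.0
          return lo, hi

      # Two decoding errors observed in ten million trials:
      print(clopper_pearson(2, 10_000_000))  # roughly (2.4e-8, 7.2e-7)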

  11. A wireless reflectance pulse oximeter with digital baseline control for unfiltered photoplethysmograms.

    PubMed

    Li, Kejia; Warren, Steve

    2012-06-01

    Pulse oximeters are central to the move toward wearable health monitoring devices and medical electronics either hosted by, e.g., smart phones or physically embedded in their design. This paper presents a small, low-cost pulse oximeter design appropriate for wearable and surface-based applications that also produces quality, unfiltered photo-plethysmograms (PPGs) ideal for emerging diagnostic algorithms. The design's "filter-free" embodiment, which employs only digital baseline subtraction as a signal compensation mechanism, distinguishes it from conventional pulse oximeters that incorporate filters for signal extraction and noise reduction. This results in high-fidelity PPGs with thousands of peak-to-peak digitization levels that are sampled at 240 Hz to avoid noise aliasing. Electronic feedback controls make these PPGs more resilient in the face of environmental changes (e.g., the device can operate in full room light), and data stream in real time across either a ZigBee wireless link or a wired USB connection to a host. On-board flash memory is available for store-and-forward applications. This sensor has demonstrated an ability to gather high-integrity data at fingertip, wrist, earlobe, palm, and temple locations from a group of 48 subjects (20 to 64 years old).
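
    A rough sketch of what digital baseline subtraction can look like in software (the running-mean baseline and the 1 s window are assumptions for illustration; the actual device removes the baseline through its own electronic feedback controls):

      import numpy as np

      def baseline_subtract(ppg, fs=240.0, win_s=1.0):
          # Estimate the slowly varying baseline with a running mean and
          # subtract it, leaving the pulsatile component untouched.
          win = max(1, int(fs * win_s))
          baseline = np.convolve(ppg, np.ones(win) / win, mode="same")
          return ppg - baseline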

  12. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724
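
    For the simplest analytical phantom, a rectangular box, the closed-form Fourier transform is a product of sinc functions; the polyhedral result in the paper generalizes this to arbitrary faceted shapes. A sketch (frequencies in cycles per unit length; np.sinc(x) is sin(πx)/(πx)):

      import numpy as np

      def box_ft(fx, fy, fz, a=1.0, b=1.0, c=1.0):
          """Closed-form 3D Fourier transform of an a x b x c box at the origin."""
          return a * b * c * np.sinc(a * fx) * np.sinc(b * fy) * np.sinc(c * fz)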

  13. Antenna induced range smearing in MST radars

    NASA Technical Reports Server (NTRS)

    Watkins, B. J.; Johnston, P. E.

    1984-01-01

    There is considerable interest in developing stratosphere troposphere (ST) and mesosphere stratosphere troposphere (MST) radars for higher resolution to study small-scale turbulent structures and waves. At present most ST and MST radars have resolutions of 150 meters or larger, and are not able to distinguish the thin (40 - 100 m) turbulent layers that are known to occur in the troposphere and stratosphere, and possibly in the mesosphere. However the antenna beam width and sidelobe level become important considerations for radars with superior height resolution. The objective of this paper is to point out that for radars with range resolutions of about 150 meters or less, there may be significant range smearing of the signals from mesospheric altitudes due to the finite beam width of the radar antenna. At both stratospheric and mesospheric heights the antenna sidelobe level for linear equally spaced phased arrays may also produce range aliased signals. To illustrate this effect the range smearing functions for two vertically directed antennas have been calculated: (1) an array of 32 coaxial-collinear strings each with 48 elements that simulates the vertical beam of the Poker Flat, Alaska, MST radar; and (2) a similar, but smaller, array of 16 coaxial-collinear strings each with 24 elements.

  14. Anti-alias filter in AORSA for modeling ICRF heating of DT plasmas in ITER

    NASA Astrophysics Data System (ADS)

    Berry, L. A.; Batchelor, D. B.; Jaeger, E. F.; RF SciDAC Team

    2011-10-01

    The spectral wave solver AORSA has been used extensively to model full-field, ICRF heating scenarios for DT plasmas in ITER. In these scenarios, the tritium (T) second harmonic cyclotron resonance is positioned near the magnetic axis, where fast magnetosonic waves are efficiently absorbed by tritium ions. In some cases, a fundamental deuterium (D) cyclotron layer can also be located within the plasma, but close to the high field boundary. In this case, the existence of multiple ion cyclotron resonances presents a serious challenge for numerical simulation because short-wavelength, mode-converted waves can be excited close to the plasma edge at the ion-ion hybrid layer. Although the left hand circularly polarized component of the wave field is partially shielded from the fundamental D resonance, some power penetrates, and a small fraction (typically <10%) can be absorbed by the D ions. We find that an anti-aliasing filter is required in AORSA to calculate this fraction correctly while including up-shift and down-shift in the parallel wave spectrum. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  15. First measurements of error fields on W7-X using flux surface mapping

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field $\rlap{-}\iota = 1/2$ magnetic configuration ($\rlap{-}\iota = \iota/2\pi$), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ${\sim}0.04$ m intrinsic island chain with a $130^{\circ}$ phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  16. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator for surface beacons, and, for orbiters, the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  17. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
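
    Applying the stated variance model to generate realistic noise for a simulated profile is straightforward; in this sketch the Guinier-like intensity and the parameters k and const. are illustrative assumptions, not fitted values:

      import numpy as np

      def saxs_sigma(q, I, k, const):
          """sigma(q) from the model sigma^2(q) = [I(q) + const.]/(k*q)."""
          return np.sqrt((I + const) / (k * q))

      q = np.linspace(0.01, 0.5, 200)              # momentum transfer grid
      I = 1e3 * np.exp(-(q * 20.0)**2 / 3.0)       # toy Guinier-like profile, Rg = 20
      rng = np.random.default_rng(1)
      I_noisy = I + rng.normal(0.0, saxs_sigma(q, I, k=5e4, const=10.0))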

  18. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) − f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  19. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  20. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  1. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
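
    A minimal stochastic (perturbed-observation) EnKF analysis step shows where a representation-error estimate enters: it simply inflates the observation-error covariance. This is a generic sketch, not the paper's superparameterization framework:

      import numpy as np

      def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
          """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
          H: (n_obs, n_state) observation operator; R: obs-error covariance,
          e.g. R = R_instrument + R_representation."""
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)
          HX = H @ X
          HA = HX - HX.mean(axis=1, keepdims=True)
          P_yy = HA @ HA.T / (n_ens - 1) + R
          P_xy = A @ HA.T / (n_ens - 1)
          K = P_xy @ np.linalg.inv(P_yy)           # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X + K @ (Y - HX)                  # analysis ensemble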

  2. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE PAGES

    Chipman, Hugh A.; Hamada, Michael S.

    2016-06-02

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.
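
    Heredity can be encoded as prior activity probabilities for an interaction conditioned on how many of its parent main effects are active; the probabilities below are illustrative assumptions only, not values from the paper:

      def interaction_prior(n_parents_active, p=(0.01, 0.10, 0.25)):
          """Prior P(interaction active | 0, 1, or 2 active parent main effects)."""
          return p[n_parents_active]

    Under this weak-heredity prior an interaction is far more plausible when both of its parents are active, which is what steers the posterior toward interpretable combinations of aliased terms.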

  3. Electric Fuel Pump Condition Monitor System Using Electricalsignature Analysis

    DOEpatents

    Haynes, Howard D [Knoxville, TN; Cox, Daryl F [Knoxville, TN; Welch, Donald E [Oak Ridge, TN

    2005-09-13

    A pump diagnostic system and method comprising current sensing probes clamped on electrical motor leads of a pump for sensing only current signals on incoming motor power, a signal processor having a means for buffering and anti-aliasing current signals into a pump motor current signal, and a computer having a means for analyzing, displaying, and reporting motor current signatures from the motor current signal to determine pump health using integrated motor and pump diagnostic parameters.

  4. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  5. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, Hugh A.; Hamada, Michael S.

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.

  6. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  7. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  8. Optical and Radio Frequency Refractivity Fluctuations from High Resolution Point Sensors: Sea Breezes and Other Observations

    DTIC Science & Technology

    2007-03-01

    velocity and direction along with vertical velocities are derived from the measured time of flight for the ultrasonic signals (manufacturer's...data set. To prevent aliasing a wave must be sampled at least twice per period, so the Nyquist frequency is f_N = f_s/2. 3. Sampling Requirements...an order of magnitude or more. To refine models or conduct climatological studies for Cn² requires direct measurements to identify the underlying

  9. Correspondence Search Mitigation Using Feature Space Anti-Aliasing

    DTIC Science & Technology

    2007-01-01

    trackers are widely used in astro-inertial navigation systems for long-range aircraft, space navigation, and ICBM guidance. When ground images are to be...frequency domain representation of the point spread function, H(f_x, f_y), is called the optical transfer function. Applying the Fourier transform to the...frequency domain representation of the image: I(f_x, f_y, t) = O(f_x, f_y, t) H(f_x, f_y) (4). In most conditions, the projected scene can be treated as a

  10. University of Glasgow at TREC 2009: Experiments with Terrier

    DTIC Science & Technology

    2009-11-01

    identify entities in the category B subset of the corpus, we resort to an efficient dictionary-based named entity recognition approach.4 In particular...we build a large dictionary of entity names using DBPedia,5 a structured representation of Wikipedia. Dictionary entries comprise all known...aliases for each unique entity, as obtained from DBPedia (e.g., ‘Barack Obama’ is represented by the dictionary entries ‘Barack Obama’ and ‘44th President

  11. Golden-ratio rotated stack-of-stars acquisition for improved volumetric MRI.

    PubMed

    Zhou, Ziwu; Han, Fei; Yan, Lirong; Wang, Danny J J; Hu, Peng

    2017-12-01

    To develop and evaluate an improved stack-of-stars radial sampling strategy for reducing streaking artifacts. The conventional stack-of-stars sampling strategy collects the same radial angle for every partition (slice) encoding. In an undersampled acquisition, such an aligned acquisition generates coherent aliasing patterns and introduces strong streaking artifacts. We show that by rotating the radial spokes in a golden-angle manner along the partition-encoding direction, the aliasing pattern is modified, resulting in improved image quality for gridding and more advanced reconstruction methods. Computer simulations were performed and phantom as well as in vivo images for three different applications were acquired. Simulation, phantom, and in vivo experiments confirmed that the proposed method was able to generate images with less streaking artifact and sharper structures based on undersampled acquisitions in comparison with the conventional aligned approach at the same acceleration factors. By combining parallel imaging and compressed sensing in the reconstruction, streaking artifacts were mostly removed with improved delineation of fine structures using the proposed strategy. We present a simple method to reduce streaking artifacts and improve image quality in 3D stack-of-stars acquisitions by re-arranging the radial spoke angles in the 3D partition direction, which can be used for rapid volumetric imaging. Magn Reson Med 78:2290-2298, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
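
    One plausible way to generate such a spoke trajectory (an assumption about the exact scheme, not code from the paper): keep uniformly spaced spokes within each partition and advance each successive partition by a golden-ratio fraction of the spoke spacing.

      import numpy as np

      def stack_of_stars_angles(n_spokes, n_partitions):
          base = np.arange(n_spokes) * np.pi / n_spokes   # uniform spokes in [0, pi)
          golden = (np.sqrt(5.0) - 1.0) / 2.0             # golden ratio fraction, ~0.618
          angles = np.empty((n_partitions, n_spokes))
          for p in range(n_partitions):
              # Rotate the whole spoke set per partition by a golden-ratio
              # fraction of the angular spacing to decohere the aliasing.
              offset = (p * golden % 1.0) * (np.pi / n_spokes)
              angles[p] = base + offset
          return angles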

  12. Characterization and Reduction of Cardiac- and Respiratory-Induced Noise as a Function of the Sampling Rate (TR) in fMRI

    PubMed Central

    Cordes, Dietmar; Nandy, Rajesh R.; Schafer, Scott; Wager, Tor D.

    2014-01-01

    It has recently been shown that both high-frequency and low-frequency cardiac and respiratory noise sources exist throughout the entire brain and can cause significant signal changes in fMRI data. It is also known that the brainstem, basal forebrain and spinal cord area are problematic for fMRI because of the magnitude of cardiac-induced pulsations at these locations. In this study, the physiological noise contributions in the lower brain areas (covering the brainstem and adjacent regions) are investigated and a novel method is presented for computing both low-frequency and high-frequency physiological regressors accurately for each subject. In particular, using a novel optimization algorithm that penalizes curvature (i.e. the second derivative) of the physiological hemodynamic response functions, the cardiac- and respiratory-related response functions are computed. The physiological noise variance is determined for each voxel and the frequency-aliasing property of the high-frequency cardiac waveform as a function of the repetition time (TR) is investigated. It is shown that for the brainstem and other brain areas associated with large pulsations of the cardiac rate, the temporal SNR associated with the low-frequency range of the BOLD response has maxima at subject-specific TRs. At these values, the high-frequency aliased cardiac rate can be eliminated by digital filtering without affecting the BOLD-related signal. PMID:24355483
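
    The frequency-aliasing property mentioned here is just spectral folding: a cardiac signal at f Hz sampled every TR seconds appears at the folded frequency below the Nyquist limit 1/(2 TR). A small sketch:

      def aliased_freq(f, TR):
          """Apparent frequency (Hz) of a signal at f Hz sampled every TR seconds."""
          fs = 1.0 / TR
          f_folded = f % fs
          return min(f_folded, fs - f_folded)

      # A 1.2 Hz cardiac signal sampled with TR = 2 s shows up at 0.2 Hz:
      print(aliased_freq(1.2, 2.0))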

  13. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  14. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data.

    PubMed

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2018-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  15. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  16. A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer

    PubMed Central

    Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie

    2014-01-01

    Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer, and in generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, which is implemented before the spatial degradation, is developed to balance the direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727

  17. [EMD Time-Frequency Analysis of Raman Spectrum and NIR].

    PubMed

    Zhao, Xiao-yu; Fang, Yi-ming; Tan, Feng; Tong, Liang; Zhai, Zhe

    2016-02-01

    This paper analyzes Raman spectra and near-infrared (NIR) spectra with time-frequency methods. Empirical mode decomposition (EMD) turns a spectrum into intrinsic mode functions (IMFs); calculating their energy proportions reveals that Raman spectral energy is uniformly distributed across the components, while for NIR the low-order IMFs carry only a small share of the effective spectroscopic information. Both real spectra and numerical experiments show that EMD treats the Raman spectrum as an amplitude-modulated signal with high-frequency absorption properties, and treats the NIR spectrum as a frequency-modulated signal for which high-frequency narrow-band demodulation is best realized in the first-order IMF. The Hilbert transform of the first-order IMF reveals that modal aliasing occurs when EMD decomposes a Raman spectrum. Further time-frequency analysis of a corn leaf's NIR spectrum shows that, after EMD, cutting off the low-energy first- and second-order components and reconstructing the spectral signal from the remaining IMFs yields a root-mean-square error of 1.0011 and a correlation coefficient of 0.9813, both indicating high reconstruction accuracy. The decomposition trend term indicates that absorbance increases with decreasing wavelength in the near-infrared band, and the Hilbert transform of the characteristic modal component shows that 657 cm⁻¹ is a frequency specific to the corn leaf stress spectrum, which can be regarded as a characteristic frequency for identification.
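
    The reconstruction procedure described above can be sketched with the PyEMD package (an assumed third-party dependency, PyPI package 'EMD-signal'): decompose, drop the two lowest-order (highest-frequency) IMFs, and rebuild the signal from the rest.

      import numpy as np
      from PyEMD import EMD   # assumed dependency: pip install EMD-signal

      def reconstruct_without_high_freq(signal, n_drop=2):
          imfs = EMD().emd(signal)            # IMFs ordered high to low frequency
          recon = imfs[n_drop:].sum(axis=0)   # drop the first n_drop components
          rmse = np.sqrt(np.mean((signal - recon)**2))
          corr = np.corrcoef(signal, recon)[0, 1]
          return recon, rmse, corr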

  18. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
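
    For a single voxel with a Gaussian effect estimate, the likelihood ratio comparing a fixed alternative effect size against zero has a closed form; a simplified sketch (the alternative beta1 is an assumed input, and this is not the paper's full procedure):

      import numpy as np

      def voxel_likelihood_ratio(beta_hat, se, beta1):
          """LR for H1: beta = beta1 versus H0: beta = 0, given the estimate
          beta_hat with standard error se under a Gaussian likelihood."""
          return np.exp((2.0 * beta_hat * beta1 - beta1**2) / (2.0 * se**2))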

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
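
    The standard analysis of such a decay fits A p^m + B to the survival probabilities over circuit lengths m and converts the decay parameter p to an error rate via the usual r = (d − 1)(1 − p)/d with d = 2^n for n qubits; a minimal fitting sketch:

      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, B, p):
          return A * p**m + B

      def fit_rb(lengths, survival, d=2):
          (A, B, p), _ = curve_fit(rb_decay, lengths, survival,
                                   p0=[0.5, 0.5, 0.98], maxfev=10000)
          r = (d - 1) / d * (1.0 - p)   # RB error rate from the decay constant
          return p, r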

  20. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  1. 76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-22

    .... 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: July 15, 2011. Dawn L. Smalls... corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register on... FR 26731), there were technical errors that are identified and corrected in the Correction of Errors...

  2. Signal Processing Algorithms for the Terminal Doppler Weather Radar: Build 2

    DTIC Science & Technology

    2010-04-30

    the various TDWR base data quality issues, range-velocity (RV) ambiguity was deemed to be the most severe challenge nationwide. Compared to S-band ... power is computed as P_N = median(|s_k|²)/(ln 2), where s is the complex I&Q signal, k is the range gate number, and l is the pulse time index. The...frequencies to the ground-clutter band around zero, the clutter filtering also removes power from the aliased frequencies and distorts the phase response
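
    The quoted noise-power estimator works because noise-only powers are exponentially distributed, and the median of an exponential is ln 2 times its mean; a sketch with an assumed layout of the I&Q samples:

      import numpy as np

      def noise_power(iq):
          """iq: complex I&Q samples (e.g., range gates x pulses). The median of
          |s|^2 over noise-only gates divided by ln 2 estimates the mean noise
          power, robustly against gates contaminated by signal."""
          return np.median(np.abs(iq)**2) / np.log(2.0)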

  3. Hierarchical rendering of trees from precomputed multi-layer z-buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, N.

    1996-02-01

    Chen and Williams show how precomputed z-buffer images from different fixed viewing positions can be reprojected to produce an image for a new viewpoint. Here images are precomputed for twigs and branches at various levels in the hierarchical structure of a tree, and adaptively combined, depending on the position of the new viewpoint. The precomputed images contain multiple z levels to avoid missing pixels in the reconstruction, subpixel masks for anti-aliasing, and colors and normals for shading after reprojection.

  4. SEASAT-A SASS wind processing

    NASA Technical Reports Server (NTRS)

    Langland, R. A.; Stephens, P. L.; Pihos, G. G.

    1980-01-01

    The techniques used for ingesting SEASAT-A SASS wind retrievals into the existing operational software are described. The intent is to assess the impact of SEASAT data on the marine wind fields produced by the global marine wind/sea level pressure analysis. This analysis is performed on a 2.5-degree latitude/longitude global grid and executes at three-hourly time increments. Wind fields with and without SASS winds are being compared. The problems of data volume reduction and aliased wind retrieval ambiguity are treated.

  5. Multi-Mode, Multi-Antenna Software Defined Radar for Adaptive Tracking and Identification of Targets in Urban Environments

    DTIC Science & Technology

    2011-10-31

    designs with code division multiple access (CDMA). Analog chirp filters were used to produce an up-chirp, which is used as a radar waveform, coupled with...signals. A potential shortcoming of CDMA techniques is that the addition of two signals will result in a non-constant amplitude signal which will be...of low-frequency A/Ds. As an example for a multiple carrier signal all the received signals from the multiple carriers are aliased onto the

  6. ASPRS Digital Imagery Guideline Image Gallery Discussion

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    2002-01-01

    The objectives of the image gallery are to 1) give users and providers a simple means of identifying appropriate imagery for a given application/feature extraction; and 2) define imagery sufficiently to be described in engineering and acquisition terms. This viewgraph presentation includes a discussion of edge response and aliasing for image processing, and a series of images illustrating the effects of signal to noise ratio (SNR) on images. Another series of images illustrates how images are affected by varying the ground sample distances (GSD).

  7. Application of Mathematical Signal Processing Techniques to Mission Systems. (l’Application des techniques mathematiques du traitement du signal aux systemes de conduite des missions)

    DTIC Science & Technology

    1999-11-01

    represents the linear time invariant (LTI) response of the combined analysis/synthesis system while the second represents the aliasing introduced into...effectively to implement voice scrambling systems based on time-frequency permutation. The most general form of such a system is shown in Fig. 22 where...

  8. Rational manipulation of digital EEG: pearls and pitfalls.

    PubMed

    Seneviratne, Udaya

    2014-12-01

    The advent of digital EEG has provided greater flexibility and more opportunities in data analysis to optimize the diagnostic yield. Changing the filter settings, sensitivity, montages, and time-base are possible rational manipulations to achieve this goal. The options to use polygraphy, video, and quantification are additional useful features. Aliasing and loss of data are potential pitfalls in the use of digital EEG. This review illustrates some common clinical scenarios where rational manipulations can enhance the diagnostic EEG yield and potential pitfalls in the process.

  9. Problems in the Digitization of Analog Measured Values (Probleme bei der Digitalisierung analoger Messwerte)

    NASA Astrophysics Data System (ADS)

    Plaßmann, Wilfried

    Measured values are often available in analog form as voltages. They are converted into a digitally coded form when (nearly) error-free transmission is required, when signal waveforms are to be stored, when further processing is intended, or when measurements with very small measurement error are necessary. Some problems introduced by this conversion are discussed here from a measurement-engineering point of view. Keywords: digitization errors; signal-to-quantization-noise ratio; improvement of the signal-to-noise ratio; sample-and-hold circuit; aliasing; acquisition of instantaneous values.

  10. Integration Toolkit and Methods (ITKM) Corporate Data Integration Tools (CDIT). Review of the State-of-the-Art with Respect to Integration Toolkits and Methods (ITKM)

    DTIC Science & Technology

    1992-06-01

    system capabilities such as memory management and network communications are provided by a virtual machine-type operating environment. Various human...thinking. The elements of this substrate include representational formality, genericity, a method of formal analysis, and augmentation of human analytical...the form of identifying: the data entity itself; its aliases (including how the data is presented to programs or human users in the form of copy

  11. Hierarchical image coding with diamond-shaped sub-bands

    NASA Technical Reports Server (NTRS)

    Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken

    1992-01-01

    We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.

  12. Effects of Random Circuit Fabrication Errors on Small Signal Gain and on Output Phase In a Traveling Wave Tube

    NASA Astrophysics Data System (ADS)

    Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.

    2011-10-01

    Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.

  13. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

    The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10⁻² for a one-dimensional architecture that only corrects bit-flip errors and 10⁻⁴ for a two-dimensional architecture that corrects bit- and phase-flip errors.

  14. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of the error correlation (ρ) and the ratio of dimensionless scaled sensitivities (r_dss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of r_dss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
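
    The core comparison can be reproduced with a tiny generalized-least-squares example: weight with the diagonal of the error covariance while the true covariance has correlation ρ, and compare the resulting parameter variances (a sketch of one such comparison, with illustrative numbers; the paper also analyzes the variance as estimated, not just propagated):

      import numpy as np

      def param_variance(J, Sigma, ignore_corr=False):
          """Variance of a single parameter estimated by weighted least squares
          with sensitivity vector J. If ignore_corr, weight with diag(Sigma)
          but propagate the full Sigma (sandwich formula)."""
          W = np.linalg.inv(np.diag(np.diag(Sigma)) if ignore_corr else Sigma)
          JWJ = float(J @ W @ J)
          if ignore_corr:
              return float(J @ W @ Sigma @ W @ J) / JWJ**2
          return 1.0 / JWJ

      rho = 0.8                                   # illustrative error correlation
      Sigma = np.array([[1.0, rho], [rho, 1.0]])
      J = np.array([1.0, 0.5])                    # sensitivity ratio r_dss = 0.5
      print(param_variance(J, Sigma), param_variance(J, Sigma, ignore_corr=True))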

  15. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and on two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
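
    The abstract does not spell out the correction, but under the common assumption that reading error is independent, zero-mean noise added to each elevation reading, its contribution can be removed from the slope variance in quadrature. A sketch under that assumption (function names are ours):

        import numpy as np

        def corrected_slope_sd(observed_slopes_deg, reading_error, slope_length):
            """Remove independent, zero-mean reading noise from a slope-frequency SD.

            A slope comes from the difference of two elevation readings over a
            baseline, so its noise SD is sqrt(2)*reading_error/slope_length
            (radians, small-angle); independent noise adds in quadrature."""
            obs_var = np.var(np.deg2rad(observed_slopes_deg))
            noise_var = 2.0 * (reading_error / slope_length) ** 2
            true_var = max(obs_var - noise_var, 0.0)   # clip if noise exceeds estimate
            return np.rad2deg(np.sqrt(true_var))

        # A 1 m reading error inflates the SD noticeably over a 25 m slope length
        # but is negligible over 500 m -- matching the abstract's point.
        slopes = np.random.default_rng(0).normal(0.0, 5.0, 1000)   # degrees
        print(corrected_slope_sd(slopes, reading_error=1.0, slope_length=25.0))
        print(corrected_slope_sd(slopes, reading_error=1.0, slope_length=500.0))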

  16. Small refractive errors--their correction and practical importance.

    PubMed

    Skrbek, Matej; Petrová, Sylvie

    2013-04-01

    Small refractive errors constitute a group of specific hyperopic refractive dispositions that are compensated by increased accommodative effort and are not manifested as a loss of visual acuity. This paper addresses several questions about their correction, following from theoretical assumptions about the problem. The main goal of this research was to confirm or refute hypotheses about the usefulness, efficiency, and frequency of corrections that do not raise visual acuity (or whose improvement is not noticeable). The next goal was to examine the connection between such correction and other factors (age, size of the refractive error, etc.). The last aim was to describe subjects' personal ratings of the correction of these small refractive errors, and to determine the minimal improvement in visual acuity that is attractive enough for a client to purchase the correction (glasses, contact lenses). It was confirmed that there is a substantial group of subjects with good visual acuity for whom correction is applicable even though it does not improve visual acuity much; its main benefit is the elimination of asthenopia. The primary reason for accepting the correction typically changes over the lifespan as accommodation declines: young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus, while elderly people acquire the correction for the improvement in visual acuity. Overall, the correction was found useful in more than 30% of cases when the gain in visual acuity was at least 0.3 on the decimal acuity scale.

  17. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, P. N.

    1974-01-01

    Focal plane sensors for determining the error in a telescope wavefront were investigated. The construction of three candidate test instruments and their evaluation in terms of small wavefront-error aberration measurements are described. A laboratory wavefront simulator was designed and fabricated to evaluate the test instruments. The laboratory wavefront-error simulator was used to evaluate three tests: a Hartmann test, a polarization shearing interferometer test, and an interferometric Zernike test.

  18. Trial-to-trial adaptation in control of arm reaching and standing posture

    PubMed Central

    Pienciak-Siewert, Alison; Horan, Dylan P.

    2016-01-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888

  19. Trial-to-trial adaptation in control of arm reaching and standing posture.

    PubMed

    Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A

    2016-12-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.

  20. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S; Chao, C; Columbia University, NY, NY

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference in sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.

  1. SWOT: A high-resolution wide-swath altimetry mission for oceanography and hydrology

    NASA Astrophysics Data System (ADS)

    Morrow, Rosemary; Fu, Lee-Lueng; Rodriguez, Ernesto

    2013-04-01

    A new satellite mission called Surface Water and Ocean Topography (SWOT) has been developed jointly by the U.S. National Aeronautics and Space Administration and France's Centre National d'Etudes Spatiales. Based on the success of nadir-looking altimetry missions in the past, SWOT will use the technique of radar interferometry to make wide-swath altimetric measurements of the elevation of surface water on land and the ocean's surface topography. The new measurements will provide information on the changing ocean currents that are key to the prediction of climate change, as well as the shifting fresh water resources resulting from climate change. Conventional satellite altimetry has revolutionized oceanography by providing nearly two decades' worth of global measurements of ocean surface topography. However, the noise level of radar altimeters limits the along-track spatial resolution to 50-100 km over the oceans. The large spacing between the satellite ground tracks limits the resolution of 2D gridded data to 200 km. Yet most of the kinetic energy of ocean circulation takes place at the scales unresolved by conventional altimetry. About 50% of the vertical transfer of heat and chemical properties of the ocean (e.g., dissolved CO2 and nutrients) is also accomplished by processes at these scales. SWOT observations will provide the critical new information at these scales for developing and testing ocean models that are designed for predicting future climate change. SWOT measurements will be in Ka band (~35 GHz), chosen for the radar to achieve high precision with a much shorter interferometry baseline of 10 m. Small look angles (~4 degrees) are required to minimize elevation errors, which limits the swath width to 120 km. An orbit with an inclination of 78 degrees and a 22-day repeat period was chosen for gapless coverage and good tidal aliasing properties. With this configuration, SWOT is expected to achieve 1 cm precision at 1 km x 1 km pixels over the ocean and 10 cm precision over 50 m x 50 m pixels over land waters. This presentation will be in two parts. First, we will give a brief overview of the SWOT mission and its sampling characteristics. We will then introduce a number of recent scientific results on our present understanding of ocean topography and surface geostrophic velocities at mesoscales and sub-mesoscales, results which have been inspired by the upcoming SWOT measurements.

  2. TOPEX/POSEIDON tides estimated using a global inverse model

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.

    1994-01-01

    Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to combine rationally all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is the sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations, and by our prior estimates of the statistics of the errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data record (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M(sub 2), S(sub 2), K(sub 1) and O(sub 1). The inverse solution yields tidal fields which are simultaneously smoother, and in better agreement with altimetric and ground truth data, than previously proposed tidal models. Relative to the 'default' tidal corrections provided with the TOPEX/POSEIDON GDR, the inverse solution reduces crossover difference variances significantly (approximately 20-30%), even though only a small number of free parameters (approximately 1000) are actually fit to the crossover data.
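
    The structure of the representer calculation can be sketched in a finite-dimensional Gaussian analog: one representer per datum, and an m x m system in the representer coefficients whose size is set by the number of data, not the size of the state. The matrices below are illustrative stand-ins, not the Laplace-tidal-equation operators.

        import numpy as np

        rng = np.random.default_rng(1)

        # Prior (dynamics-based) state u0 with covariance P; data d = H u + noise
        # with covariance Cd. H holds the data functionals, one row per datum.
        n, m = 50, 8
        u0 = np.zeros(n)                                   # direct (forced) solution
        P = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)
        H = rng.normal(size=(m, n))
        Cd = 0.1 * np.eye(m)
        d = rng.normal(size=m)

        # One representer per datum: r_j = P H^T e_j (columns of R).
        R = P @ H.T

        # Representer coefficients b solve the small m x m system
        # (H R + Cd) b = d - H u0.
        b = np.linalg.solve(H @ R + Cd, d - H @ u0)

        # Inverse solution = direct solution + linear combination of representers.
        u = u0 + R @ b
        print(u[:5])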

  3. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  4. Assessment of the generalization of learned image reconstruction and the potential for transfer learning.

    PubMed

    Knoll, Florian; Hammernik, Kerstin; Kobler, Erich; Pock, Thomas; Recht, Michael P; Sodickson, Daniel K

    2018-05-17

    Although deep learning has shown great promise for MR image reconstruction, an open question regarding the success of this approach is the robustness in the case of deviations between training and test data. The goal of this study is to assess the influence of image contrast, SNR, and image content on the generalization of learned image reconstruction, and to demonstrate the potential for transfer learning. Reconstructions were trained from undersampled data using data sets with varying SNR, sampling pattern, image contrast, and synthetic data generated from a public image database. The performance of the trained reconstructions was evaluated on 10 in vivo patient knee MRI acquisitions from 2 different pulse sequences that were not used during training. Transfer learning was evaluated by fine-tuning baseline trainings from synthetic data with a small subset of in vivo MR training data. Deviations in SNR between training and testing led to substantial decreases in reconstruction image quality, whereas image contrast was less relevant. Trainings from heterogeneous training data generalized well toward the test data with a range of acquisition parameters. Trainings from synthetic, non-MR image data showed residual aliasing artifacts, which could be removed by transfer learning-inspired fine-tuning. This study presents insights into the generalization ability of learned image reconstruction with respect to deviations in the acquisition settings between training and testing. It also provides an outlook for the potential of transfer learning to fine-tune trainings to a particular target application using only a small number of training cases. © 2018 International Society for Magnetic Resonance in Medicine.

  5. Design of an all-attitude flight control system to execute commanded bank angles and angles of attack

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Eggleston, D. M.

    1976-01-01

    A flight control system for use in air-to-air combat simulation was designed. The inputs to the flight control system are commanded bank angle and angle of attack; the outputs are commands to the control-surface actuators such that the commanded values are achieved in near-minimum time while sideslip is controlled to remain small. For the longitudinal direction, a conventional linear control system with gains scheduled as a function of dynamic pressure is employed. For the lateral direction, a novel control system is employed, consisting of a linear portion for small bank-angle errors and a bang-bang control system for large errors and error rates.
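
    A toy version of such a hybrid lateral law is sketched below; the gains, threshold, and switching logic are illustrative only, since the original schedule is not given in the abstract.

        import numpy as np

        def lateral_roll_command(bank_err, bank_err_rate, kp=2.0, kd=0.8,
                                 lin_limit=np.deg2rad(10.0), u_max=1.0):
            """Hybrid lateral control law in the spirit of the abstract: linear PD
            for small bank-angle errors, bang-bang (full deflection) for large
            errors and error rates."""
            if abs(bank_err) < lin_limit:
                u = kp * bank_err + kd * bank_err_rate       # linear region
                return float(np.clip(u, -u_max, u_max))
            # Bang-bang region: switch on a combination of error and error rate
            # so the roll can be arrested without overshoot (simplified logic).
            switch = bank_err + (kd / kp) * bank_err_rate
            return u_max if switch > 0.0 else -u_max

        print(lateral_roll_command(np.deg2rad(3.0), 0.0))    # small error: proportional
        print(lateral_roll_command(np.deg2rad(45.0), 0.0))   # large error: saturated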

  6. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since the spatial multiplexing of views needed to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter, and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
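
    The effect can be seen in a one-dimensional caricature: if crosstalk blends a view's columns with its neighbors' before the subsampling implicit in multiplexing, it acts as a filter whose gain is below unity at the frequencies that fold back, so the explicit anti-alias pre-filter can keep a wider passband. The kernel weights below are illustrative, not measured values from the paper.

        import numpy as np

        M = 8                                    # subsampling factor (one column in M per view)
        c = np.array([0.15, 0.7, 0.15])          # crosstalk kernel: leakage from neighbors

        def gain(kernel, w):
            """Magnitude response of a symmetric 3-tap kernel at digital frequency w."""
            return abs(kernel[1] + 2 * kernel[0] * np.cos(w))

        # Subsampling by M creates spectral replicas at multiples of 2*pi/M. Any
        # gain below unity at the folded frequencies means less aliased energy.
        for w in (0.0, 2 * np.pi / M, np.pi):
            print(f"w = {w:.3f} rad: gain = {gain(c, w):.3f}")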

  7. Tuning of successively scanned two monolithic Vernier-tuned lasers and selective data sampling in optical comb swept source optical coherence tomography

    PubMed Central

    Choi, Dong-hak; Yoshimura, Reiko; Ohbayashi, Kohji

    2013-01-01

    Monolithic Vernier-tuned super-structure grating distributed Bragg reflector (SSG-DBR) lasers are expected to become one of the most promising sources for swept source optical coherence tomography (SS-OCT), with a long coherence length, reduced sensitivity roll-off, and the potential for a very fast A-scan rate. However, previous implementations of the lasers suffer from four main problems: 1) frequencies deviate from the targeted values when scanned, 2) large amounts of noise appear associated with abrupt changes in injection currents, 3) optically aliased noise appears due to the long coherence length, and 4) the narrow wavelength coverage of a single chip limits resolution. We have developed a method of dynamical frequency tuning, a method of selective data sampling to eliminate current switching noise, an interferometer to reduce aliased noise, and an excess-noise-free connection of two serially scanned lasers to enhance resolution, solving these problems. An optical frequency comb SS-OCT system achieved a sensitivity of 124 dB and a depth-dependent dynamic range of 55-72 dB at an A-scan rate of 3.1 kHz with a resolution of 15 μm, by discretely scanning two SSG-DBR lasers, i.e., L-band (1.560-1.599 μm) and UL-band (1.598-1.640 μm). A few OCT images with excellent image penetration depth were obtained. PMID:24409394

  8. Simplifying and enhancing the use of PyMOL with horizontal scripts

    PubMed Central

    2016-01-01

    Scripts are used in PyMOL to exert precise control over the appearance of the output and to ease remaking similar images at a later time. We developed horizontal scripts to ease script development. A horizontal script makes a complete scene in PyMOL like a traditional vertical script. The commands in a horizontal script are separated by semicolons. These scripts are edited interactively on the command line with no need for an external text editor. This simpler workflow accelerates script development. In using PyMOL, the illustration of a molecular scene requires an 18-element matrix of viewport settings. The default format spans several lines and is laborious to manually reformat onto one line. This default format prevents the fast assembly of horizontal scripts that can reproduce a molecular scene. We solved this problem by writing a function that displays the settings on one line in a compact format suitable for horizontal scripts. We also demonstrate the mapping of aliases to horizontal scripts. Many aliases can be defined in a single script file, which can be useful for applying custom molecular representations to any structure. We also redefined horizontal scripts as Python functions to enable the use of the help function to print documentation about an alias to the command history window. We discuss how these methods of using horizontal scripts both simplify and enhance the use of PyMOL in research and education. PMID:27488983
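
    A sketch of the two ideas in PyMOL's Python layer; compact_view is our hypothetical name for the one-line view function the abstract describes, built on PyMOL's documented cmd.get_view, cmd.extend, and cmd.alias calls:

        # Run inside PyMOL.
        from pymol import cmd

        def compact_view():
            """Print the current 18-element view as a one-line set_view command,
            ready to paste into a semicolon-separated horizontal script."""
            v = cmd.get_view()
            print("set_view (" + ", ".join(f"{x:.3f}" for x in v) + ");")

        cmd.extend("compact_view", compact_view)

        # An alias mapped to a horizontal script (commands separated by semicolons):
        cmd.alias("graycartoon", "hide everything; show cartoon; color gray80")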

  9. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
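
    For reference, the conventional RB analysis the paper critiques fits the survival probabilities to A p^m + B and converts the decay parameter p to r = (d - 1)(1 - p)/d. A minimal sketch with synthetic data and standard formulas (not the authors' code):

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_error_rate(lengths, survival, d=2):
            """Fit the standard RB decay A p^m + B and return r = (d-1)(1-p)/d
            (d = 2 for a single qubit)."""
            decay = lambda m, A, B, p: A * p**m + B
            (A, B, p), _ = curve_fit(decay, lengths, survival,
                                     p0=(0.5, 0.5, 0.99), maxfev=10000)
            return (d - 1) * (1 - p) / d

        # Synthetic example: a true decay with p = 0.995 plus sampling noise.
        m = np.arange(1, 200, 10)
        rng = np.random.default_rng(0)
        probs = 0.5 * 0.995**m + 0.5 + rng.normal(0, 0.005, m.size)
        print(f"r ~ {rb_error_rate(m, probs):.2e}")   # about 2.5e-3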

  10. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
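
    The underlying dual-frequency relation is standard: first-order ionospheric range delay scales as TEC/f^2, so the Ku-band delay follows from the C/Ku range difference. A sketch using the TOPEX band frequencies; the authors' actual bias-estimation procedure is more involved.

        # Standard first-order dual-frequency ionospheric correction (not the
        # authors' exact processing): R(f) = R_true + k/f^2, so
        # delay_ku = (R_ku - R_c) * f_c^2 / (f_c^2 - f_ku^2).
        F_KU = 13.6e9   # Ku-band frequency, Hz (TOPEX altimeter)
        F_C = 5.3e9     # C-band frequency, Hz

        def iono_delay_ku(range_ku, range_c):
            """Ku-band ionospheric range delay, same units as the inputs."""
            return (range_ku - range_c) * F_C**2 / (F_C**2 - F_KU**2)

        # The ionosphere delays C band more than Ku band, so range_c > range_ku
        # and the Ku delay comes out positive; e.g. a 5 cm C/Ku difference:
        print(f"{iono_delay_ku(0.0, 0.05):.4f} m")   # ~0.009 m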

  11. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems demand high performance; telesurgery, for example, needs satisfactorily high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With the synchronization errors constrained, high performance levels, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors that converge to zero as time goes to infinity, can be achieved with error-constrained control. Clearly, finite-time convergence is more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for a teleoperation system with constrained position error. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
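
    For illustration, a generic nonsingular fast terminal sliding surface has the form below; this is a textbook construction, not necessarily the paper's exact NFTSM surface or its transformed errors.

        import numpy as np

        def nftsm_surface(e, e_dot, k1=1.0, k2=1.0, g1=2.0, g2=1.5):
            """Generic nonsingular fast terminal sliding mode surface:

                s = e + k1*|e|^g1*sign(e) + k2*|e_dot|^g2*sign(e_dot)

            with 1 < g2 < 2 so the control law avoids a singularity at e = 0,
            and the extra power term giving fast convergence far from the origin."""
            return (e + k1 * np.abs(e)**g1 * np.sign(e)
                      + k2 * np.abs(e_dot)**g2 * np.sign(e_dot))

        print(nftsm_surface(0.5, -0.2))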

  12. Development of a 3D VHR seismic reflection system for lacustrine settings - a case study in Lake Geneva, Switzerland

    NASA Astrophysics Data System (ADS)

    Scheidhauer, M.; Dupuy, D.; Marillier, F.; Beres, M.

    2003-04-01

    For better understanding of geologic processes in complex lacustrine settings, detailed information on geologic features is required. In many cases, the 3D seismic method may be the only appropriate approach. The aim of this work is to develop an efficient very high-resolution 3D seismic reflection system for lake studies. In Lake Geneva, Switzerland, near the city of Lausanne, past high-resolution investigations revealed a complex fault zone, which was subsequently chosen for testing our new system of three 24-channel streamers and integrated differential GPS (dGPS) positioning. A survey, carried out in 9 days in August 2001, covered an area of 1500 m x 675 m and comprised 180 CMP lines sailed perpendicular to the fault strike, always updip, since otherwise the asymmetric system would result in different stacks for opposite directions. Accurate navigation and a shot spacing of 5 m are achieved with specially developed navigation and shot-triggering software that uses differential GPS onboard and a reference base close to the lake shore. Hydrophone positions could be accurately (<0.5 m) calculated with the aid of three additional dGPS antennas mounted on rafts attached to the streamer tails. Towed at a distance of only 75 m behind the vessel, they allowed determination of possible feathering due to cross-line currents or small course variations. The multi-streamer system uses two retractable booms, deployed on each side of the boat, that rest on floats. They separate the two outer streamers from the one in the center by a distance of 7.5 m. Combined with a receiver spacing of 2.5 m, the bin dimension of the 3D data becomes 3.75 m in the cross-line and 1.25 m in the inline direction. Potential aliasing problems from steep reflectors up to 30° within the fault zone motivated the use of a 15/15 cu. in. double-chamber bubble-canceling Mini G.I. air gun (operated at 80 bars and 1 m depth). Although its frequencies do not exceed 650 Hz, it combines penetration of non-aliased signal to depths of 400 m with a best vertical resolution of 1.15 m. The multi-streamer system allows acquisition of high quality data, which already after conventional 3D processing show particularly clear images of the fault zone and the overlying sediments in all directions. Prestack depth migration can further improve data quality and is more appropriate for subsequent geologic interpretation.

  13. Prediction of matching condition for a microstrip subsystem using artificial neural network and adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim

    2016-11-01

    In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm²), small insertion loss (0.08 dB), and a wide fractional bandwidth (FBW) of 61%. To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only one lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operating frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN), and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data by four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, and a small error between the predicted values and the numerical solution is obtained.
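
    The four performance measures are standard and easy to reproduce; a small helper (our naming) under the usual definitions:

        import numpy as np

        def fit_metrics(y_true, y_pred):
            """Correlation coefficient (CC), mean absolute error (MAE), average
            percentage error (APE), and root mean square error (RMSE)."""
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            err = y_pred - y_true
            return {
                "CC": np.corrcoef(y_true, y_pred)[0, 1],
                "MAE": np.mean(np.abs(err)),
                "APE": 100.0 * np.mean(np.abs(err / y_true)),  # assumes nonzero targets
                "RMSE": np.sqrt(np.mean(err**2)),
            }

        print(fit_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))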

  14. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  15. Medication errors in a rural hospital.

    PubMed

    Madegowda, Bharathi; Hill, Pamela D; Anderson, Mary Ann

    2007-06-01

    The purpose of this investigation was to compare and contrast three nursing shifts in a small rural Midwestern hospital with regard to the number of reported medication errors, the units on which they occurred, and the types and severity of errors. Results can be beneficial in planning and implementing a quality improvement program in the area of medication administration with the nursing staff.

  16. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or none of the systematic observation residuals that are a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
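
    The "L-ribbon" parameter search is specialized, but the underlying trade-off can be sketched with ordinary Tikhonov regularization and an L-curve: for each candidate parameter, record the residual norm against the solution norm and look for the corner. An illustrative sketch, not the CSR processing:

        import numpy as np

        def tikhonov_l_curve(A, b, lambdas):
            """Tikhonov solutions x(lam) = argmin ||Ax - b||^2 + lam^2 ||x||^2 over a
            grid of regularization parameters, returning (residual norm, solution
            norm) pairs; the corner of the log-log L-curve balances data fit
            against damping."""
            n = A.shape[1]
            pts = []
            for lam in lambdas:
                x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
                pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
            return np.array(pts)

        rng = np.random.default_rng(2)
        A = rng.normal(size=(100, 30)) @ np.diag(1.0 / np.arange(1, 31)**2)  # ill-conditioned
        b = A @ rng.normal(size=30) + 0.01 * rng.normal(size=100)
        print(tikhonov_l_curve(A, b, np.logspace(-6, 1, 20))[:3])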

  17. Changes in Rod and Frame Test Scores Recorded in Schoolchildren during Development – A Longitudinal Study

    PubMed Central

    Bagust, Jeff; Docherty, Sharon; Haynes, Wayne; Telford, Richard; Isableu, Brice

    2013-01-01

    The Rod and Frame Test has been used to assess the degree to which subjects rely on the visual frame of reference to perceive vertical (visual field dependence-independence perceptual style). Early investigations found children exhibited a wide range of alignment errors, which reduced as they matured. These studies used a mechanical Rod and Frame system, and presented only mean values of grouped data. The current study also considered changes in individual performance. Changes in rod alignment accuracy in 419 school children were measured using a computer-based Rod and Frame test. Each child was tested at school Grade 2 and retested in Grades 4 and 6. The results confirmed that children displayed a wide range of alignment errors, which decreased with age but did not reach the expected adult values. Although most children showed a decrease in frame dependency over the 4 years of the study, almost 20% had increased alignment errors, suggesting that they were becoming more frame-dependent. Plots of individual variation (SD) against mean error allowed the sample to be divided into 4 groups: the majority with small errors and SDs; a group with small SDs but alignments clustering around the frame angle of 18°; a group showing large errors in the opposite direction to the frame tilt; and a small number with large SDs whose alignment appeared to be random. The errors in the last 3 groups could largely be explained by alignment of the rod to different aspects of the frame. At corresponding ages, females exhibited larger alignment errors than males, although this did not reach statistical significance. This study confirms that children rely more heavily on the visual frame of reference for processing spatial orientation cues. Most become less frame-dependent as they mature, but there are considerable individual differences. PMID:23724139

  18. Precision Closed-Loop Orbital Maneuvering System Design and Performance for the Magnetospheric Multi-Scale Mission (MMS) Formation

    NASA Technical Reports Server (NTRS)

    Chai, Dean; Queen, Steve; Placanica, Sam

    2015-01-01

    NASA's Magnetospheric Multi-Scale (MMS) mission, successfully launched on March 13, 2015 (UTC), consists of four identically instrumented spin-stabilized observatories that function as a constellation to study magnetic reconnection in space. The need to maintain sufficiently accurate spatial and temporal formation resolution of the observatories must be balanced against the logistical constraints of executing overly-frequent maneuvers on a small fleet of spacecraft. These two considerations make for an extremely challenging maneuver design problem. This paper focuses on the design elements of a 6-DOF spacecraft attitude control and maneuvering system capable of delivering the high-precision adjustments required by the constellation designers---specifically, the design, implementation, and on-orbit performance of the closed-loop formation-class maneuvers that include initialization, maintenance, and re-sizing. The maneuvering control system flown on MMS utilizes a micro-gravity resolution accelerometer sampled at a high rate in order to achieve closed-loop velocity tracking of an inertial target with arc-minute directional and millimeter-per-second magnitude accuracy. This paper summarizes the techniques used for correcting bias drift, sensor-head offsets, and centripetal aliasing in the acceleration measurements. It also discusses the on-board pre-maneuver calibration and compensation algorithms as well as the implementation of the post-maneuver attitude adjustments.
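
    The rigid-body part of such corrections can be sketched generically: an accelerometer head offset r from the center of mass of a spinning spacecraft senses centripetal and angular-acceleration terms on top of bias and the linear acceleration of interest. The sketch below is our simplification; the flight algorithms (including the handling of spin-induced aliasing) are more involved.

        import numpy as np

        def corrected_accel(a_meas, bias, omega, omega_dot, r_offset):
            """Remove bias and rigid-body terms from an offset accelerometer:

                a_lin = a_meas - bias - omega x (omega x r) - omega_dot x r
            """
            centripetal = np.cross(omega, np.cross(omega, r_offset))
            euler = np.cross(omega_dot, r_offset)
            return a_meas - bias - centripetal - euler

        # A 3 rpm spin with a 1 m sensor offset gives a ~0.1 m/s^2 centripetal
        # term, far above the micro-g maneuver accelerations being tracked.
        omega = np.array([0.0, 0.0, 2 * np.pi * 3 / 60])
        print(corrected_accel(np.zeros(3), np.zeros(3), omega, np.zeros(3),
                              np.array([1.0, 0.0, 0.0])))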

  19. Precision Closed-Loop Orbital Maneuvering System Design and Performance for the Magnetospheric Multiscale Formation

    NASA Technical Reports Server (NTRS)

    Chai, Dean J.; Queen, Steven Z.; Placanica, Samuel J.

    2015-01-01

    NASA's Magnetospheric Multiscale (MMS) mission, successfully launched on March 13, 2015 (UTC), consists of four identically instrumented spin-stabilized observatories that function as a constellation to study magnetic reconnection in space. The need to maintain sufficiently accurate spatial and temporal formation resolution of the observatories must be balanced against the logistical constraints of executing overly-frequent maneuvers on a small fleet of spacecraft. These two considerations make for an extremely challenging maneuver design problem. This paper focuses on the design elements of a 6-DOF spacecraft attitude control and maneuvering system capable of delivering the high-precision adjustments required by the constellation designers---specifically, the design, implementation, and on-orbit performance of the closed-loop formation-class maneuvers that include initialization, maintenance, and re-sizing. The maneuvering control system flown on MMS utilizes a micro-gravity resolution accelerometer sampled at a high rate in order to achieve closed-loop velocity tracking of an inertial target with arc-minute directional and millimeter-per-second magnitude accuracy. This paper summarizes the techniques used for correcting bias drift, sensor-head offsets, and centripetal aliasing in the acceleration measurements. It also discusses the on-board pre-maneuver calibration and compensation algorithms as well as the implementation of the post-maneuver attitude adjustments.

  20. An extended Reed Solomon decoder design

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.; Purviance, J.

    1991-01-01

    It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for error patterns consisting of a long burst plus some random errors.

  1. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm quantum subroutines occur with the structure F U F^-1, where F is a unitary transform and U is performing a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F^-1 can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F^-1 have the same hardware errors, a symmetry boost in the fidelity of the combined F U F^-1 quantum operation results when compared to the case in which the errors in F and F^-1 are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU(N) errors (N = 2, 4, 8), that the boost factor scales like a Lorentzian in δ/σ, where σ and δ are the error strengths of the diagonal over- and under-rotation errors and the off-diagonal SU(N) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ, it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results that show that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and under-rotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.
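
    The spin-echo mechanism can be seen in a two-matrix toy: when the sandwiched operation commutes with the error generator, identical (symmetric) errors in F and F^-1 cancel exactly, while independently drawn errors leave a residual. This caricature is ours, not the paper's gate-by-gate Shor simulation.

        import numpy as np

        rng = np.random.default_rng(3)

        def rz(a):
            """Single-qubit z rotation (diagonal, so z-rotation errors commute)."""
            return np.diag([np.exp(-0.5j * a), np.exp(0.5j * a)])

        def fidelity(V, W):
            """|Tr(V^dag W)|^2 / d^2, the standard unitary overlap fidelity."""
            return abs(np.trace(V.conj().T @ W))**2 / V.shape[0]**2

        # Toy F U F^-1 with F = Rz(t) carrying an over-rotation error delta and a
        # sandwiched U that commutes with the error generator. The symmetric
        # implementation reuses the same delta in F^-1.
        t, U = 0.7, rz(1.1)
        sigma, trials = 0.2, 5000
        f_sym = f_ind = 0.0
        for _ in range(trials):
            d1, d2 = rng.normal(0.0, sigma, 2)
            f_sym += fidelity(U, rz(t + d1) @ U @ rz(-(t + d1)))  # errors cancel exactly
            f_ind += fidelity(U, rz(t + d1) @ U @ rz(-(t + d2)))  # residual Rz(d1 - d2)
        print(f"symmetric: {f_sym/trials:.4f}  independent: {f_ind/trials:.4f}")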

  2. Injecting Errors for Testing Built-In Test Software

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
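
    A minimal sketch of the first algorithm's AND-mask injection (all names and masks hypothetical):

        # The test harness intercepts a device read and ANDs it with a
        # device-specific mask, yielding a value the BIT criteria should reject.
        DEVICE_ERROR_MASKS = {"status_reg": 0xFFFE, "mem_page0": 0x00FF}

        def read_device(name, raw_read):
            """Stand-in for the small permanent instrumentation in the SUT:
            raw_read is the real device access; the AND corrupts the value."""
            value = raw_read(name)
            if name in DEVICE_ERROR_MASKS:       # injection enabled for this device
                value &= DEVICE_ERROR_MASKS[name]
            return value

        def bit_check(name, expected, raw_read):
            """BIT-style pass/fail criterion on the (possibly corrupted) value."""
            return read_device(name, raw_read) == expected

        # A healthy device returns 0xABCD; the injected AND clears bit 0 of
        # status_reg, so the BIT routine must flag a failure:
        print(bit_check("status_reg", 0xABCD, lambda n: 0xABCD))   # False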

  3. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  4. A NEW INSAR DERIVED DEM OF BLACK RAPIDS GLACIER

    NASA Astrophysics Data System (ADS)

    Shugar, D. H.; Rabus, B.; Clague, J. J.

    2009-12-01

    We have constructed a new digital elevation model representing the 1995 surface of surge-type Black Rapids Glacier and the surrounding central Alaska Range, using ERS-1/2 repeat-pass interferometry. First, we isolated the topographic phase from three interferograms with contrasting perpendicular baselines. Next we attempted to automatically unwrap this topographic phase but encountered numerous errors due to the terrain containing areas of poor coherence from fringe aliasing, radar layover or shadow. We then consistently corrected these persistent phase-unwrapping errors in all three interferograms using an iterative semi-automated approach that capitalizes on the multi-baseline nature of the data set. Over the surface of Black Rapids Glacier, the accuracy of the new DEM is estimated at better than +/- 12 m. Ground-surveyed spot elevations from 1995 corroborate this accuracy estimate. Comparison of the new DEM with a 1951 U.S. Geological Survey topographic map, and with ground survey data from other years, shows the gradual return of Black Rapids Glacier to pre-surge conditions. In the 44-year period between 1951 and 1995 the observed average steepening of the longitudinal profile is ~0.6°. The maximum elevation changes in the ablation and accumulation zones are -256 m and +75 m, respectively, suggesting corresponding average rates of elevation change of about -5.8 m/yr and +1.7 m/yr. These rates are 1.5-2 times higher than those indicated by the ground survey spot elevation measurements over the period 1975 to 2005. Considering the significant overlap of the two periods of measurement, the inferred average rates for 1951-1975 would have to be very large (-7.5 m/yr and +2.3 m/yr, respectively) for these two findings to be consistent. A second comparison with the recently released ASTER G-DEM (data from 2001) led to no glaciologically usable results due to major artifacts in the ASTER G-DEM. We therefore conclude that the 1951 U.S. Geological Survey map and the ASTER G-DEM both appear biased over the Black Rapids Glacier surface and caution is advised when using either for quantitative estimates of elevation change over the glacier surface.

  5. Rotation Rate of Saturn's Magnetosphere using CAPS Plasma Measurements

    NASA Technical Reports Server (NTRS)

    Sittler, E.; Cooper, J.; Simpson, D.; Paterson, W.

    2012-01-01

    We present the current status of an investigation of the rotation rate of Saturn's magnetosphere using a 3D velocity moment technique being developed at Goddard, which is similar to the 2D version used by Sittler et al. (2005) [1] for SOI and similar to that used by Thomsen et al. (2010). This technique allows one to cover nearly the full energy range of the CAPS IMS, 1 V ≤ E/Q < 50 kV. Since our technique maps the observations into a local inertial frame, it does work during roll manoeuvres. We have made comparisons with Wilson et al. (2008) [2] (2005-358 and 2005-284), who perform a bi-Maxwellian fit to the ion singles data, and our results are nearly identical. We will also make comparisons with results by Thomsen et al. (2010) [3]. Our analysis uses ion composition data to weight the non-compositional data, referred to as singles data, to separate H+, H2+ and water group ions (W+) from each other. The ion data set is especially valuable for measuring flow velocities for protons, which are more difficult to derive using singles data within the inner magnetosphere, where the signal is dominated by heavy ions (i.e., the proton peak merges with the W+ peak as a low-energy shoulder). Our technique uses a flux function, which is zero in the proper plasma flow frame, to estimate fluid parameter uncertainties. The comparisons investigate the experimental errors and potential for systematic errors in the analyses, including ours. The rolls provide the best data set when it comes to getting 4π coverage of the plasma but are more susceptible to time aliasing effects. Since our analysis is a velocity moments technique, it will work within the inner magnetosphere where pickup ions are important and velocity distributions are non-Maxwellian. We will therefore present results inside Enceladus' L shell and determine whether mass loading is important. In the future we plan to make comparisons with magnetic field observations, using Saturn ionosphere conductivities as presently known and the field-aligned currents necessary for the planet to enforce corotation of the rotating plasma.

  6. Understanding radio polarimetry. V. Making matrix self-calibration work: processing of a simulated observation

    NASA Astrophysics Data System (ADS)

    Hamaker, J. P.

    2006-09-01

    Context: This is Paper V in a series on polarimetric aperture synthesis based on the algebra of 2×2 matrices. Aims: It validates the matrix self-calibration theory of the preceding Paper IV and outlines the algorithmic methods that had to be developed for its application. Methods: New avenues of polarimetric self-calibration opened up in Paper IV are explored by processing a simulated observation. To focus on the polarimetric issues, it is set up so as to sidestep some of the common complications of aperture synthesis, yet properly represent physical conditions. In addition to a representative collection of observing errors, the simulated instrument includes strongly varying Faraday rotation and antennas with unequal feeds. The selfcal procedure is described in detail, including aspects in which it differs from the scalar case, and its effects are demonstrated with a number of intermediate image results. Results: The simulation's outcome is in full agreement with the theory. The nonlinear matrix equations for instrumental parameters are readily solved by iteration; a convergence problem is easily remedied with a new ancillary algorithm. Instrumental effects are cleanly separated from source properties without reference to changes in parallactic rotation during the observation. Polarimetric images of high purity and dynamic range result. As theory predicts, polarimetric errors that are common to all sources inevitably remain; prior knowledge of the statistics of linear and circular polarization in a typical observed field can be applied to eliminate most of them. Conclusions: The paper conclusively demonstrates that matrix selfcal per se is a viable method that may foster substantial advancement in the art of radio polarimetry. For its application in real observations, a number of issues must be resolved that matrix selfcal has in common with its scalar sibling, such as the treatment of extended sources and the familiar sampling and aliasing problems. The close analogy between scalar interferometry and its matrix-based generalisation suggests that one may apply well-developed methods of scalar interferometry. Marrying these methods to those of this paper will require a significant investment in new software. Two such developments are known to be foreseen or underway.

  7. Time-resolved fluorescence imaging of slab gels for lifetime base-calling in DNA sequencing applications.

    PubMed

    Lassiter, S J; Stryjewski, W; Legendre, B L; Erdmann, R; Wahl, M; Wurm, J; Peterson, R; Middendorf, L; Soper, S A

    2000-11-01

    A compact time-resolved near-IR fluorescence imager was constructed to obtain lifetime and intensity images of DNA sequencing slab gels. The scanner consisted of a microscope body with f/1.2 relay optics onto which were mounted a pulsed diode laser (repetition rate 80 MHz, lasing wavelength 680 nm, average power 5 mW), filtering optics, and a large-photoactive-area (diameter 500 microns) single-photon avalanche diode that was actively quenched to provide a large dynamic operating range. The time-resolved data were processed using electronics configured in a conventional time-correlated single-photon-counting format with all of the counting hardware situated on a PC card resident on the computer bus. The microscope head produced a timing response of 450 ps (FWHM) in a scanning mode, allowing the measurement of subnanosecond lifetimes. The time-resolved microscope head was placed in an automated DNA sequencer and translated across a 21-cm-wide gel plate in approximately 6 s (scan rate 3.5 cm/s) with an accumulation time per pixel of 10 ms. The sampling frequency was 0.17 Hz (duty cycle 0.0017), sufficient to prevent signal aliasing during the electrophoresis separation. Software (written in Visual Basic) allowed acquisition of both the intensity image and the lifetime analysis of DNA bands migrating through the gel in real time. Using a dual-labeling (IRD700 and Cy5.5 labeling dyes)/two-lane sequencing strategy, we successfully read 670 bases of a control M13mp18 ssDNA template using lifetime identification. Comparison of the reconstructed sequence with the known sequence of the phage indicated the number of miscalls was only 2, producing an error rate of approximately 0.3% (identification accuracy 99.7%). The lifetimes were calculated using maximum likelihood estimators and allowed on-line determinations with high precision, even when short integration times were used to construct the decay profiles. Comparison of the lifetime base calling to a single-dye/four-lane sequencing strategy indicated similar results in terms of miscalls, but reduced insertion and deletion errors using lifetime identification methods, improving the overall read accuracy.
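
    Lifetime determination from a decay profile can be sketched with the textbook maximum-likelihood estimator for an exponential truncated to the laser repetition window; this is a generic estimator, not necessarily the authors' exact implementation.

        import numpy as np

        def mle_lifetime(arrival_times_ns, window_ns):
            """Maximum-likelihood lifetime for an exponential decay truncated to
            the repetition window T, solving the MLE condition

                mean(t) = tau - T / (exp(T/tau) - 1)

            by fixed-point iteration. Ignores the instrument response."""
            tbar, T = np.mean(arrival_times_ns), window_ns
            tau = tbar                          # initial guess
            for _ in range(50):
                tau = tbar + T / np.expm1(T / tau)
            return tau

        rng = np.random.default_rng(4)
        T = 12.5                                # 80 MHz repetition window, ns
        t = rng.exponential(0.9, 20000)         # true lifetime 0.9 ns
        t = t[t < T]                            # photons within the window
        print(f"tau ~ {mle_lifetime(t, T):.3f} ns")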

  8. [Characteristics of specifications of transportable inverter-type X-ray equipment].

    PubMed

    Yamamoto, Keiichi; Miyazaki, Shigeru; Asano, Hiroshi; Shinohara, Fuminori; Ishikawa, Mitsuo; Ide, Toshinori; Abe, Shinji; Negishi, Toru; Miyake, Hiroyuki; Imai, Yoshio; Okuaki, Tomoyuki

    2003-07-01

    Our X-ray systems study group measured and examined the characteristics of four transportable inverter-type X-ray units. X-ray tube voltage and tube current were measured at the measurement terminals provided with the equipment. X-ray tube voltage, irradiation time, and dose were also measured with a non-invasive X-ray tube voltage meter, and X-ray output was measured with a fluorescence meter. The items investigated were the reproducibility and linearity of X-ray output, the error of the preset X-ray tube voltage and tube current, and the X-ray tube voltage ripple percentage. The waveforms of X-ray tube voltage, tube current, and fluorescence intensity were recorded with an oscilloscope and analyzed on a personal computer. All of the units had preset errors of X-ray tube voltage and tube current that met JIS standards. The X-ray tube voltage ripple percentage of each unit showed the expected tendency to decrease as X-ray tube voltage increased. Although the X-ray output reproducibility of unit A exceeded the JIS standard, the other units were within it. Unit A required 40 ms for the tube current to reach the target value, and there was some X-ray output loss because of a trough in the tube current. Owing to ripple in the tube current, the fluorescence waveform rippled in units B and C. Waveform analysis could not be performed for unit D because of aliasing in the recording device. The maximum tube current of transportable inverter-type X-ray equipment is as low as 10-20 mA, so the irradiation time for chest radiography exceeds 0.1 s. Improved radiographic technique is therefore required for patients who cannot hold still or suspend respiration, and shorter irradiation times are needed if such units are to be used for remote medical treatment.

  9. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is studied statistically, with experimental results from a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small with a quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and an ultra-low error floor can thus be achieved. This conclusion is also verified by the experimental results.
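
    A back-of-the-envelope sketch of the design implication (all numbers are hypothetical placeholders, not the paper's measurements): if decoding failures leave only a few residual bits, an outer code correcting t bits suppresses the floor by the tail mass above t.

    ```python
    # Residual-error pmf conditioned on an inner-decoder failure (hypothetical).
    frame_error_rate = 1e-5                                # inner LDPC frame error rate
    residual_pmf = {1: 0.70, 2: 0.25, 3: 0.04, 4: 0.01}    # P(#residual bits | failure)

    def surviving_floor(t_correctable):
        """Error floor left after an outer code correcting t bit errors."""
        tail = sum(p for e, p in residual_pmf.items() if e > t_correctable)
        return frame_error_rate * tail

    for t in range(5):
        print(t, surviving_floor(t))   # floor drops rapidly with outer-code strength
    ```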

  10. The Gulliver Effect: The Impact of Error in an Elephantine Subpopulation on Estimates for Lilliputian Subpopulations

    ERIC Educational Resources Information Center

    Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene

    2009-01-01

    An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…

  11. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by the experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  12. High-Accuracy Measurement of Small Movement of an Object behind Cloth Using Airborne Ultrasound

    NASA Astrophysics Data System (ADS)

    Hoshiba, Kotaro; Hirata, Shinnosuke; Hachiya, Hiroyuki

    2013-07-01

    The acoustic measurement of vital signs such as breathing and heartbeat while the subject is standing and wearing clothes is a difficult problem. In this paper, we present basic experimental results on measuring the small movements of an object behind cloth. We measured the acoustic characteristics of various types of cloth to obtain the transmission loss through the cloth. To observe the relationship between measurement error and target speed under a low signal-to-noise ratio (SNR), we measured the movement of an object behind cloth, with the target placed apart from the cloth so that the target reflection could be separated from the cloth reflection. We found that a small movement of less than 6 mm/s could be observed using an M-sequence, a moving-target-indicator (MTI) filter, and phase-difference tracking, even when the SNR was less than 0 dB. We also present the results of a theoretical error analysis of the MTI filter and phase tracking for high-accuracy measurement. The characteristics of the systematic error were clarified.
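
    A minimal sketch of the processing chain as described, with assumed carrier and pulse-repetition values (the M-sequence pulse-compression step is omitted):

    ```python
    # MTI filtering suppresses static reflections (e.g., the cloth); the
    # pulse-to-pulse phase of the remaining target echo gives displacement.
    import numpy as np

    c = 343.0     # speed of sound in air [m/s]
    f0 = 40e3     # ultrasonic carrier [Hz] (assumed, not from the paper)
    prf = 50.0    # pulse repetition frequency [Hz] (assumed)
    lam = c / f0

    def target_velocity(echoes, gate):
        """echoes: (n_pulses, n_range_bins) complex demodulated echo matrix."""
        moving = np.diff(echoes, axis=0)           # first-order MTI filter
        z = moving[:, gate]                        # samples at the target range gate
        dphi = np.angle(z[1:] * np.conj(z[:-1]))   # pulse-to-pulse phase steps
        displacement = dphi * lam / (4 * np.pi)    # two-way path: lambda/(4*pi) per radian
        return np.mean(displacement) * prf         # mean radial velocity [m/s]
    ```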

  13. Control of Systems With Slow Actuators Using Time Scale Separation

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vehram; Nguyen, Nhan

    2009-01-01

    This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
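
    The generic singularly perturbed plant/slow-actuator cascade that this kind of analysis considers can be written as follows (our illustration of the standard form, not the paper's exact equations):

    ```latex
    % x = plant state, x_a = actuator state, u = control, 0 < eps << 1.
    % Time-scale separation removes eps from the actuator dynamics, and the
    % resulting tracking error is of the order of eps.
    \begin{aligned}
      \dot{x} &= f(x, x_a),\\
      \varepsilon\,\dot{x}_a &= g(x_a, u),
    \end{aligned}
    \qquad
    \|x(t) - x_{\mathrm{ref}}(t)\| = O(\varepsilon) \ \text{after the fast transient}.
    ```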

  14. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
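
    For orientation, the scalar single-band special case of Wiener restoration is sketched below (the paper's Wiener-matrix filter additionally couples the aliased bands and the decimation stages, which this sketch does not):

    ```python
    # Frequency-domain Wiener filter: W = conj(H) * S / (|H|^2 * S + N),
    # with H the blur transfer function and S, N signal/noise power levels.
    import numpy as np

    def wiener_restore(image, psf, signal_power, noise_power):
        """Restore a blurred, noisy image; psf is assumed centred and the
        same shape as the image, with flat signal and noise spectra."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        W = np.conj(H) * signal_power / (np.abs(H) ** 2 * signal_power + noise_power)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    ```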

  15. Low-frequency Target Strength and Abundance of Shoaling Atlantic Herring (Clupea harengus) in the Gulf of Maine during the Ocean Acoustic Waveguide Remote Sensing 2006 Experiment

    DTIC Science & Technology

    2010-01-01

    the northern flank of Georges Bank from east to west. As a result, annual stock estimates may be highly aliased in both time and space. One of the...transmitted signals from the source array for transmission loss and source level calibrations. Two calibrated acoustic targets made of air-filled rubber...region to the north is comprised of over 70106 individuals. Concurrent localized imaging of fish aggregations at OAWRS-directed locations was

  16. Abandoned Uranium Mine (AUM) Points, Navajo Nation, 2016, US EPA Region 9

    EPA Pesticide Factsheets

    This GIS dataset contains point features of all Abandoned Uranium Mines (AUMs) on or within one mile of the Navajo Nation. Points are centroids developed from the Navajo Nation production mines polygon dataset, which comprises productive and unproductive Abandoned Uranium Mines. Attributes include mine names, aliases, links to AUM reports, indicators of whether an AUM was mined above or below ground, indicators of whether an AUM was mined above or below the local water table, and the region in which an AUM is located. This dataset contains 608 features.

  17. Tailoring the Statistical Experimental Design Process for LVC Experiments

    DTIC Science & Technology

    2011-03-01

    incredibly large test space, it is important to point out that Gray is presenting a simple case to demonstrate the application of an experimental...weapon's effectiveness. Gray defines k1 = 4 factors in the whole plot and k2 = 3 factors in the sub plot, with f1 and f2 as the number of factors...aliased with interaction terms in the whole plot and sub plot, respectively. Gray uses the notation 2^(k1-f1) × 2^(k2-f2) to represent the fractional

  18. Role of the QBO in Modulating the Influence of the 11 Year Solar Cycle on the Atmosphere Using Constant Forcings

    DTIC Science & Technology

    2010-09-21

    Rolando R. Garcia,3 Douglas E. Kinnison,3 Fabrizio Sassi,4 and Stacy Walters3 Received 26 August 2009; revised 15 April 2010; accepted 27 April 2010...constant sea surface temperatures, are discussed. Citation: Matthes, K., D. R. Marsh, R. R. Garcia, D. E. Kinnison, F. Sassi, and S. Walters (2010...Smith and Matthes, 2008] or to aliasing effects with tropical SSTs [Austin et al., 2008] and ENSO [Marsh and Garcia, 2007]. Note, however, that ENSO

  19. Distributed Kalman filtering compared to Fourier domain preconditioned conjugate gradient for laser guide star tomography on extremely large telescopes.

    PubMed

    Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent

    2013-05-01

    This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt. 45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A 28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.

  20. An optical systems analysis approach to image resampling

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    1997-01-01

    All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the final resolution is not the resolution at which the data were observed. The registration algorithm designer and end-product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene from an end-to-end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning-windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high-resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response, then with the detection system's response, and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning-windowed sinc interpolator, and the results and errors tabulated and discussed.
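
    A 1-D sketch of a Hanning-windowed sinc interpolator of the kind described (the kernel half-width and the truncation at array borders are our assumptions; the paper's context is 2-D):

    ```python
    # Resample a uniformly sampled signal (unit spacing) at arbitrary positions
    # using a sinc kernel tapered by a Hanning window to limit spectral leakage.
    import numpy as np

    def hann_sinc_resample(samples, positions, half_width=8):
        out = np.zeros(len(positions))
        for i, x in enumerate(positions):
            n0 = int(np.floor(x))
            n = np.arange(n0 - half_width + 1, n0 + half_width + 1)
            n = n[(n >= 0) & (n < len(samples))]                    # truncate at borders
            u = x - n                                               # offsets in sample units
            window = 0.5 * (1.0 + np.cos(np.pi * u / half_width))   # Hanning taper
            out[i] = np.dot(samples[n], np.sinc(u) * window)
        return out

    # e.g. shift a signal by a quarter sample:
    sig = np.sin(0.2 * np.arange(64))
    shifted = hann_sinc_resample(sig, np.arange(8, 56) + 0.25)
    ```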

  1. Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Radhakrishnan, Senthilkumaran

    2012-01-01

    High-fidelity models of plume-regolith interaction are difficult to develop because of the widely disparate flow conditions that exist in this process. The gas in the core of a rocket plume can often be modeled as a time-dependent, high-temperature, turbulent, reacting continuum flow. However, due to the vacuum conditions on the lunar surface, the mean molecular path in the outer parts of the plume is too long for the continuum assumption to remain valid. Molecular methods are better suited to model this region of the flow. Finally, granular and multiphase flow models must be employed to describe the dust and debris that are displaced from the surface, as well as how a crater is formed in the regolith. At present, standard commercial CFD (computational fluid dynamics) software is not capable of coupling each of these flow regimes to provide an accurate representation of this flow process, necessitating the development of custom software. This software solves the fluid-flow-governing equations in an Eulerian framework, coupled with the particle transport equations that are solved in a Lagrangian framework. It uses a fourth-order explicit Runge-Kutta scheme for temporal integration and an eighth-order central finite differencing scheme for spatial discretization. The non-linear terms in the governing equations are recast in cubic skew-symmetric form to reduce aliasing error. The second derivative viscous terms are computed using eighth-order narrow stencils that provide better diffusion for the highest resolved wave numbers. A fourth-order Lagrange interpolation procedure is used to obtain gas-phase variable values at the particle locations.
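
    The aliasing-control idea behind the skew-symmetric recasting, shown in 1-D for the simpler quadratic split (the solver described uses a cubic split for the compressible equations and eighth-order stencils; the second-order stencil below is for brevity):

    ```python
    # Skew-symmetric split of the convective term:
    #   d(u*u)/dx  ->  0.5 * d(u*u)/dx + 0.5 * u * du/dx
    # which discretely conserves the quadratic invariant and curbs aliasing.
    import numpy as np

    def ddx(f, dx):
        """Central difference on a periodic grid (2nd order for brevity)."""
        return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

    def convective_skew(u, dx):
        return 0.5 * ddx(u * u, dx) + 0.5 * u * ddx(u, dx)
    ```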

  2. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error rate inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound the Type I error rate below a goal level for studies with small sample size.
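
    A conceptual sketch of the interim update (a generic normal-theory two-group formula with illustrative inputs; the paper's calculation is specific to paired screening accuracy and adds the small-sample critical-value adjustment):

    ```python
    # Recompute the total N after the pilot from interim estimates of the
    # outcome standard deviation and the disease prevalence.
    from statistics import NormalDist

    def updated_total_n(delta, sigma_hat, prevalence_hat, alpha=0.05, power=0.90):
        z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
        n_diseased = 2.0 * (z * sigma_hat / delta) ** 2   # diseased cases needed
        return round(n_diseased / prevalence_hat)         # inflate for prevalence

    print(updated_total_n(delta=0.10, sigma_hat=0.25, prevalence_hat=0.02))
    ```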

  3. Target size matters: target errors contribute to the generalization of implicit visuomotor learning.

    PubMed

    Reichenthal, Maayan; Avraham, Guy; Karniel, Amir; Shmuelof, Lior

    2016-08-01

    The process of sensorimotor adaptation is considered to be driven by errors. While sensory prediction errors, defined as the difference between the planned and the actual movement of the cursor, drive implicit learning processes, target errors (e.g., the distance of the cursor from the target) are thought to drive explicit learning mechanisms. This distinction was mainly studied in the context of arm reaching tasks where the position and the size of the target were constant. We hypothesize that in a dynamic reaching environment, where subjects have to hit moving targets and the targets' dynamic characteristics affect task success, implicit processes will benefit from target errors as well. We examine the effect of target errors on learning of an unnoticed perturbation during unconstrained reaching movements. Subjects played a Pong game, in which they had to hit a moving ball by moving a paddle controlled by their hand. During the game, the movement of the paddle was gradually rotated with respect to the hand, reaching a final rotation of 25°. Subjects were assigned to one of two groups: The high-target error group played the Pong with a small ball, and the low-target error group played with a big ball. Before and after the Pong game, subjects performed open-loop reaching movements toward static targets with no visual feedback. While both groups adapted to the rotation, the postrotation reaching movements were directionally biased only in the small-ball group. This result provides evidence that implicit adaptation is sensitive to target errors. Copyright © 2016 the American Physiological Society.

  4. Probabilistic terrain models from waveform airborne LiDAR: AutoProbaDTM project results

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Goncalves, G. R.

    2012-12-01

    The main objective of the AutoProbaDTM project was to develop new methods for automated probabilistic topographic map production using the latest LiDAR scanners. It included algorithmic development, implementation, and validation over a 200 km2 test area in continental Portugal, representing roughly 100 GB of raw data and half a billion waveforms. We aimed to generate digital terrain models automatically, including ground topography as well as uncertainty maps, using Bayesian inference for model estimation and error propagation, and approaches based on image processing. Here we present the results of the completed project (methodological developments and processing results from the test dataset). In June 2011, the test data were acquired in central Portugal, over an area of geomorphological and ecological interest, using a Riegl LMS-Q680i sensor. We managed to survey 70% of the test area at a satisfactory sampling rate, the angular spacing matching the laser beam divergence and the ground spacing nearly equal to the footprint (almost 4 pts/m2 for a 50 cm footprint at 1500 m AGL). This is crucial for correct processing, as aliasing artifacts are significantly reduced. Reverse engineering was required because the data were delivered in a proprietary binary format; we were then able to read the waveforms and the essential parameters. A robust waveform processing method has been implemented and tested, and georeferencing and geometric computations have been coded. Fast gridding and interpolation techniques have been developed. Validation is nearly complete, as are geometric calibration, IMU error correction, full error propagation, and large-scale DEM reconstruction. A probabilistic processing software package has been implemented and code optimization is in progress. This package includes new boresight calibration procedures, robust peak extraction modules, DEM gridding and interpolation methods, and means to visualize the produced uncertain surfaces (topography and accuracy map). Vegetation filtering for bare ground extraction has been left aside, and we wish to explore this research area in the future. A thorough validation of the new techniques and computed models has been conducted, using a large number of ground control points (GCPs) acquired with GPS, evenly distributed and classified according to ground cover and terrain characteristics. More than 16,000 GCPs were acquired during field work. The results are now freely accessible online through a web map service (GeoServer), allowing users to visualize the data interactively without having to download the full processed dataset.

  5. An evaluation of satellite data for estimating the area of small forestland in the southern lower peninsula of Michigan. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Karteris, M. A. (Principal Investigator)

    1980-01-01

    A winter black and white band 5 image, a winter color image, a fall color image, and a diazo color composite of the fall scene were used to assess the use and potential of LANDSAT images for mapping and estimating acreage of small scattered forest tracts in Barry County, Michigan. Forests as small as 2.5 acres were mapped from each LANDSAT data source. The maps for each image were compared with an available forest-type map. Mapping errors detected were categorized as boundary and identification errors. The most frequently misclassified areas were agricultural lands, treed bogs, brushlands, and lowland and mixed hardwood stands. Stocking level affected interpretation more than stand size. The overall level of interpretation performance was expressed through the estimation of classification, interpretation, and mapping accuracies. These accuracies ranged between 74% and 98%. Considering errors, accuracy, and cost, winter color imagery is the best LANDSAT alternative for mapping small forest tracts. However, since the availability of cloud-free winter images of the study area is significantly lower than for other seasons, a diazo-enhanced image of a fall scene is recommended as the next best alternative.

  6. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-07-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.
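
    For orientation, the standard Gaussian damping model that underlies such analyses (our notation and example values; a Gaussian photo-z scatter sigma_z smears radial positions by sigma_r = c*sigma_z/H(z), suppressing line-of-sight modes):

    ```python
    import numpy as np

    c = 299792.458   # speed of light [km/s]

    def radial_smearing(sigma_z, H_of_z):
        """sigma_r in Mpc/h, given H(z) in (km/s)/(Mpc/h)."""
        return c * sigma_z / H_of_z

    def damped_power(P_true, k, mu, sigma_r):
        """P_obs(k, mu) = P_true(k, mu) * exp(-(k * mu * sigma_r)**2)."""
        return P_true * np.exp(-(k * mu * sigma_r) ** 2)

    # e.g. sigma_z = 0.003*(1+z) at z = 1, H(z) ~ 120 (km/s)/(Mpc/h):
    print(radial_smearing(0.006, 120.0))   # ~15 Mpc/h radial smearing
    ```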

  7. Comparison of direct and heterodyne detection optical intersatellite communication links

    NASA Technical Reports Server (NTRS)

    Chen, C. C.; Gardner, C. S.

    1987-01-01

    The performance of direct and heterodyne detection optical intersatellite communication links is evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and between 3 and 5 dB for a heterodyne QFSK system. For the heterodyne systems, carrier phase noise is another major source of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice; in that case, the heterodyne system has a slightly higher power margin because of its higher receiver sensitivity.

  8. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-04-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAO). Using analytic expressions and results from 1 000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAO, and the cosmological information in them. We find that: a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; b) photo-z errors decrease the smearing of BAO due to non-linear redshift-space distortions (RSD) by giving less weight to line-of-sight modes; and c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.

  9. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
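
    For reference, a CNOP-type parameter perturbation can be summarized as the constrained maximization below (our paraphrase of the general framework; the notation is assumed, not taken from the abstract):

    ```latex
    % p = reference parameters, p' = perturbation, delta = constraint radius,
    % M_T = nonlinear model propagator to lead time T.
    p'_{\delta} \;=\; \arg\max_{\|p'\| \le \delta}\, \bigl\| M_T(p + p') - M_T(p) \bigr\|
    ```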

  10. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points whose movement diverges from the estimated stabilization error; these points are assumed to be located on moving objects and are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image, so the algorithm is very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner, with stabilization errors added artificially so that the output of the algorithm could be compared with the added errors.
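
    A minimal sketch of the point-list logic described (the mover threshold is illustrative):

    ```python
    # Estimate the global image shift robustly from matched high-contrast
    # points, then flag points that diverge from it as moving objects.
    import numpy as np

    def estimate_shift(pts_prev, pts_curr, mover_thresh=2.0):
        """pts_*: (N, 2) matched point coordinates in consecutive frames."""
        d = pts_curr - pts_prev                      # per-point displacement
        shift = np.median(d, axis=0)                 # stabilization-error estimate
        resid = np.linalg.norm(d - shift, axis=1)    # divergence from global shift
        movers = resid > mover_thresh                # candidate moving-object points
        return shift, movers
    ```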

  11. Small step tracking - Implications for the oculomotor 'dead zone'. [eye response failure below threshold target displacements

    NASA Technical Reports Server (NTRS)

    Wyman, D.; Steinman, R. M.

    1973-01-01

    Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.

  12. Relating Regime Structure to Probability Distribution and Preferred Structure of Small Errors in a Large Atmospheric GCM

    NASA Astrophysics Data System (ADS)

    Straus, D. M.

    2007-12-01

    The probability distribution (pdf) of errors is followed in identical twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. 30 integrations per winter (for 15 winters) are available with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results interpreted in terms of clusters / regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features, and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  14. Accuracy assessment in the Large Area Crop Inventory Experiment

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.

    1979-01-01

    The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able to meet not only the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.

  15. Absorbance and fluorometric sensing with capillary wells microplates.

    PubMed

    Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian; Liew, Oi Wah; Ng, Tuck Wah

    2010-12-01

    Detection and readout from small-volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in managing small liquid volumes. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays without the error often associated with the meniscus that forms at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies, and pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results, particularly in the context of high-throughput screening. We show that the adapted design is also able to detect bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.

  16. Tailored Codes for Small Quantum Memories

    NASA Astrophysics Data System (ADS)

    Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.

    2017-12-01

    We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison with the error model observed in a recent seven-qubit trapped-ion experiment.

  17. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    DOE PAGES

    Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.

    2015-01-23

    DIII-D experiments show that fully penetrated resonant n=1 error-field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to a similarly large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower the rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked-mode island that can induce disruption.

  18. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    PubMed

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, because WSD is the standard error of measurement of only a single assessment, it is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without limit to the extent that differential practice effects occur, which can even make RCI(WSD) unable to detect any reliable change.
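
    For contrast, the classical reliable change index uses the standard error of the difference score built from the baseline standard deviation and the test-retest reliability (a minimal sketch of the Jacobson-Truax form; the example numbers are ours):

    ```python
    import math

    def rci_classic(x1, x2, sd_baseline, reliability):
        """RCI = (x2 - x1) / SE_diff, with SE_diff = sd * sqrt(2) * sqrt(1 - r);
        |RCI| > 1.96 is conventionally taken as reliable change."""
        se_diff = sd_baseline * math.sqrt(2.0) * math.sqrt(1.0 - reliability)
        return (x2 - x1) / se_diff

    print(rci_classic(x1=100, x2=110, sd_baseline=15, reliability=0.9))  # ~1.49
    ```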

  19. Phase correction for three-dimensional (3D) diffusion-weighted interleaved EPI using 3D multiplexed sensitivity encoding and reconstruction (3D-MUSER).

    PubMed

    Chang, Hing-Chiu; Hui, Edward S; Chiu, Pui-Wai; Liu, Xiaoxi; Chen, Nan-Kuei

    2018-05-01

    A three-dimensional (3D) multiplexed sensitivity encoding and reconstruction (3D-MUSER) algorithm is proposed to reduce aliasing artifacts and signal corruption caused by inter-shot 3D phase variations in 3D diffusion-weighted echo planar imaging (DW-EPI). 3D-MUSER extends the original framework of multiplexed sensitivity encoding (MUSE) to a hybrid k-space-based reconstruction, thereby enabling the correction of inter-shot 3D phase variations. A 3D single-shot EPI navigator echo was used to measure inter-shot 3D phase variations. The performance of 3D-MUSER was evaluated by analyses of the point-spread function (PSF), signal-to-noise ratio (SNR), and artifact levels. The efficacy of phase correction using 3D-MUSER for different slab thicknesses and b-values was investigated. Simulations showed that 3D-MUSER could eliminate artifacts due to through-slab phase variation and reduce noise amplification due to SENSE reconstruction. All aliasing artifacts and signal corruption in 3D interleaved DW-EPI acquired with different slab thicknesses and b-values were reduced by the new algorithm. Near-whole-brain single-slab 3D DTI with 1.3-mm isotropic voxels acquired at 1.5 T was successfully demonstrated. 3D phase correction for 3D interleaved DW-EPI data is made possible by 3D-MUSER, thereby improving the feasible slab thickness and maximum feasible b-value. Magn Reson Med 79:2702-2712, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Low-Cutoff, High-Pass Digital Filtering of Neural Signals

    NASA Technical Reports Server (NTRS)

    Mojarradi,Mohammad; Johnson, Travis; Ortiz, Monico; Cunningham, Thomas; Andersen, Richard

    2004-01-01

    The figure depicts the major functional blocks of a system, now undergoing development, for conditioning neural signals acquired by electrodes implanted in a brain. The overall functions to be performed by this system can be summarized as preamplification, multiplexing, digitization, and high-pass filtering. Other systems under development for recording neural signals typically contain resistor-capacitor analog high-pass filters characterized by cutoff frequencies in the vicinity of 100 Hz. In the application for which this system is being developed, there is a requirement for a cutoff frequency of 5 Hz. Because the resistors needed to obtain such a low cutoff frequency would be impractically large, it was decided to perform the high-pass filtering by use of digital rather than analog circuitry. In addition, it was decided to time-multiplex the digitized signals from the multiple input channels into a single stream of data in a single output channel. The signal in each input channel is first processed by a preamplifier having a voltage gain of approximately 50. Embedded in each preamplifier is a low-pass anti-aliasing filter having a cutoff frequency of approximately 10 kHz. The anti-aliasing filters make it possible to couple the outputs of the preamplifiers to the input ports of a multiplexer. The output of the multiplexer is a single stream of time-multiplexed samples of analog signals. This stream is processed by a main differential amplifier, the output of which is sent to an analog-to-digital converter (ADC). The output of the ADC is sent to a digital signal processor (DSP).

  1. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction.

    PubMed

    Yang, Guang; Yu, Simiao; Dong, Hao; Slabaugh, Greg; Dragotti, Pier Luigi; Ye, Xujiong; Liu, Fangde; Arridge, Simon; Keegan, Jennifer; Guo, Yike; Firmin, David

    2018-06-01

    Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MRI images with much less required raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Networks-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
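
    A schematic of the kind of composite objective described (placeholder weights and names; not the paper's exact losses or hyperparameters):

    ```python
    # Generator objective combining an image-domain content loss, a
    # frequency-domain consistency loss, and an adversarial term.
    import numpy as np

    def composite_loss(x_rec, x_true, d_score, w_img=1.0, w_freq=1.0, w_adv=0.01):
        img = np.mean((x_rec - x_true) ** 2)              # image-domain MSE
        f_rec, f_true = np.fft.fft2(x_rec), np.fft.fft2(x_true)
        freq = np.mean(np.abs(f_rec - f_true) ** 2)       # frequency-domain MSE
        adv = -np.log(d_score + 1e-12)                    # generator adversarial term
        return w_img * img + w_freq * freq + w_adv * adv
    ```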

  2. New learning based super-resolution: use of DWT and IGMRF prior.

    PubMed

    Gajjar, Prakash P; Joshi, Manjunath V

    2010-05-01

    In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on gray scale as well as color images. The method is compared with the standard interpolation technique and with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where the memory, the transmission bandwidth, and the camera cost are the main constraints.
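
    A skeleton of the MAP minimization step under a simplified homogeneous quadratic prior (the paper uses spatially varying IGMRF weights; A and At are assumed callables for the blur-plus-decimation operator and its adjoint):

    ```python
    import numpy as np

    def laplacian(x):
        """Periodic 5-point Laplacian used as the smoothness-penalty gradient."""
        return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

    def map_super_resolve(y, A, At, x0, lam=0.01, step=0.1, iters=200):
        """Minimize ||y - A(x)||^2 + lam * ||grad x||^2 by gradient descent."""
        x = x0.copy()
        for _ in range(iters):
            grad = 2.0 * At(A(x) - y) - 2.0 * lam * laplacian(x)
            x -= step * grad
        return x
    ```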

  3. GRACE AOD1B Product Release 06: Long-Term Consistency and the Treatment of Atmospheric Tides

    NASA Astrophysics Data System (ADS)

    Dobslaw, Henryk; Bergmann-Wolf, Inga; Dill, Robert; Poropat, Lea; Flechtner, Frank

    2017-04-01

    The GRACE satellites orbiting the Earth at very low altitudes are affected by rapid changes in the Earth's gravity field caused by mass redistribution in the atmosphere and oceans. To avoid temporal aliasing of such high-frequency variability into the final monthly-mean gravity fields, those effects are typically modelled during the numerical orbit integration by applying the 6-hourly GRACE Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) a priori model. In preparation for the next GRACE gravity field re-processing currently performed by the GRACE Science Data System, a new version of AOD1B has been calculated. The data-set is based on 3-hourly surface pressure anomalies from ECMWF that have been mapped to a common reference orography by means of ECMWF's mean sea-level pressure diagnostic. Atmospheric tides as well as the corresponding oceanic response at the S1, S2, S3, and L2 frequencies and their annual modulations have been fitted and removed in order to retain the non-tidal variability only. The data-set is expanded into spherical harmonics complete up to degree and order 180. In this contribution, we will demonstrate that AOD1B RL06 is now free from spurious jumps in the time-series related to occasional changes in ECMWF's operational numerical weather prediction system. We will also highlight the rationale for separating tidal signals from the AOD1B coefficients, and will finally discuss the current quality of the AOD1B forecasts that have been introduced very recently for GRACE quicklook and near-realtime applications.

  4. Space and time aliasing structure in monthly mean polar-orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Zeng, Lixin; Levy, Gad

    1995-01-01

    Monthly mean wind fields from the European Remote Sensing Satellite (ERS1) scatterometer are presented. A banded structure which resembles the satellite subtrack is clearly and consistently apparent in the isotachs as well as the u and v components of the routinely produced fields. The structure also appears in the means of data from other polar-orbiting satellites and instruments. An experiment is designed to trace the cause of the banded structure. The European Centre for Medium-Range Weather Forecasts (ECMWF) gridded surface wind analyses are used as a control set. These analyses are also sampled with the ERS1 temporal-spatial sampling pattern to form a simulated scatterometer wind set. Both sets are used to create monthly averages. The banded structures appear in the monthly mean simulated data but do not appear in the control set. It is concluded that the source of the banded structure lies in the spatial and temporal sampling of the polar-orbiting satellite, which results in undersampling. The problem involves multiple timescales and space scales, oversampling and undersampling in space, aliasing in the time and space domains, and preferentially sampled variability. It is shown that commonly used spatial smoothers (or filters), while producing visually pleasing results, also significantly bias the true mean. A three-dimensional spatial-temporal interpolator is designed and used to determine the mean field. It is found to produce satisfactory monthly means from both simulated and real ERS1 data. The implications for climate studies involving polar-orbiting satellite data are discussed.
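
    A toy demonstration of the sampling effect (all numbers are illustrative): a daily-cycle signal sampled at a fixed local time produces a location-dependent bias in the "monthly mean", and neighbouring cross-track locations sampled at slightly different local times acquire different biases, which appear as track-aligned bands.

    ```python
    import numpy as np

    days = np.arange(35)

    def monthly_mean(local_sampling_hour, period_h=24.0):
        t = days * 24.0 + local_sampling_hour       # sampling epochs [hours]
        signal = np.sin(2 * np.pi * t / period_h)   # unresolved high-frequency signal
        return signal.mean()                        # biased "monthly mean"

    for hour in (10.0, 10.5, 11.0):                 # nearby cross-track locations
        print(hour, monthly_mean(hour))             # biases differ -> banding
    ```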

  5. Dating a tropical ice core by time-frequency analysis of ion concentration depth profiles

    NASA Astrophysics Data System (ADS)

    Gay, M.; De Angelis, M.; Lacoume, J.-L.

    2014-09-01

    Ice core dating is a key parameter for the interpretation of ice archives. However, the relationship between ice depth and ice age generally cannot be easily established and requires the combination of numerous investigations and/or modelling efforts. This paper presents a new approach to ice core dating based on time-frequency analysis of chemical profiles at a site where seasonal patterns may be significantly distorted by sporadic events of regional importance, specifically at the summit area of Nevado Illimani (6350 m a.s.l.), located in the eastern Bolivian Andes (16°37' S, 67°46' W). We used ion concentration depth profiles collected along a 100 m deep ice core. The results of Fourier time-frequency and wavelet transforms were first compared. Both methods were applied to a nitrate concentration depth profile. The resulting chronologies were checked by comparison with the multi-proxy year-by-year dating published by de Angelis et al. (2003) and with volcanic tie points. With this first experiment, we demonstrated the efficiency of Fourier time-frequency analysis in tracking the natural variability of nitrate. In addition, we were able to show spectrum aliasing due to under-sampling below 70 m. In this article, we propose a de-aliasing method which significantly improves the core dating compared with manual annual-layer counting. Fourier time-frequency analysis was applied to the concentration depth profiles of seven other ions, providing information on the suitability of each of them for the dating of tropical Andean ice cores.
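
    A sketch of the underlying dating idea (window and step sizes are illustrative): track the dominant depth-frequency of the ion signal, in annual layers per metre, with a sliding windowed FFT, then integrate that frequency over depth to accumulate years.

    ```python
    import numpy as np

    def annual_layer_frequency(conc, dz, win=256, step=64):
        """Return (depth_centres, dominant cycles-per-metre) along the core."""
        freqs = np.fft.rfftfreq(win, d=dz)
        centres, ridge = [], []
        for i0 in range(0, len(conc) - win, step):
            seg = conc[i0:i0 + win] - np.mean(conc[i0:i0 + win])
            spec = np.abs(np.fft.rfft(seg * np.hanning(win)))
            ridge.append(freqs[np.argmax(spec[1:]) + 1])   # skip the DC bin
            centres.append((i0 + win / 2) * dz)
        return np.array(centres), np.array(ridge)

    # Age accumulates as the integral of layers-per-metre over depth:
    #   age = np.cumsum(ridge * np.gradient(centres))
    ```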

  6. Developing stochastic model of thrust and flight dynamics for small UAVs

    NASA Astrophysics Data System (ADS)

    Tjhai, Chandra

    This thesis presents a stochastic thrust model and an aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed that relates the thrust generated by a small motor-driven propeller to the throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs, where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to the thrust generated. Rather, they are non-linear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented and the impact of errors which arise from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. An uncertainty model of the aircraft aerodynamic coefficients, developed from the wind tunnel experiments, is discussed at the end of the thesis.
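
    A closed-form illustration of the perturbation idea using the standard propeller thrust relation (the thesis uses an iterative blade-element/motor procedure instead; all values below are hypothetical):

    ```python
    # T = C_T * rho * n^2 * D^4; first-order propagation of C_T and RPM errors:
    #   (sigma_T / T)^2 = (sigma_CT / C_T)^2 + (2 * sigma_n / n)^2
    import math

    def thrust_and_sigma(C_T, rho, n, D, sig_CT, sig_n):
        """n in rev/s; returns nominal thrust [N] and its 1-sigma error."""
        T = C_T * rho * n ** 2 * D ** 4
        rel_var = (sig_CT / C_T) ** 2 + (2.0 * sig_n / n) ** 2
        return T, T * math.sqrt(rel_var)

    print(thrust_and_sigma(C_T=0.10, rho=1.225, n=100.0, D=0.25,
                           sig_CT=0.01, sig_n=2.0))   # ~ (4.79 N, 0.52 N)
    ```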

  7. A one-dimensional model of flow in a junction of thin channels, including arterial trees

    NASA Astrophysics Data System (ADS)

    Kozlov, V. A.; Nazarov, S. A.

    2017-08-01

    We study a Stokes flow in a junction of thin channels (of diameter O(h)) for fixed flows of the fluid at the inlet cross-sections and fixed peripheral pressure at the outlet cross-sections. On the basis of the idea of the pressure drop matrix, apart from Neumann conditions (fixed flow) and Dirichlet conditions (fixed pressure) at the outer vertices, the ordinary one-dimensional Reynolds equations on the edges of the graph are equipped with transmission conditions containing a small parameter h at the inner vertices, which are transformed into the classical Kirchhoff conditions as h → +0. We establish that the pre-limit transmission conditions ensure an exponentially small error O(e^(-ρ/h)), ρ > 0, in the calculation of the three-dimensional solution, but the Kirchhoff conditions give only a polynomially small error. For the arterial tree, under the assumption that the walls of the blood vessels are rigid, a (2×2) pressure drop matrix appears for every bifurcation node, and its influence on the transmission conditions is taken into account by means of small variations of the lengths of the graph and by introducing effective lengths of the one-dimensional description of blood vessels whilst keeping the Kirchhoff conditions and exponentially small approximation errors. We discuss concrete forms of arterial bifurcation and available generalizations of the results, in particular, the Navier-Stokes system of equations. Bibliography: 59 titles.
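
    The classical Kirchhoff conditions recovered in the h → +0 limit can be stated as follows (our paraphrase; p_j and q_j denote the pressure and the flux on edge j at an inner node with J incident edges):

    ```latex
    \begin{aligned}
      p_1 = p_2 = \dots = p_J \quad &\text{(continuity of pressure at the node)},\\
      \sum_{j=1}^{J} q_j = 0  \quad &\text{(balance of fluxes at the node)}.
    \end{aligned}
    ```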

  8. "Napouléon's" Sequential Heritage. Using a Student Error as a Resource for Learning and Teaching Pronunciation in the French Foreign Language Classroom

    ERIC Educational Resources Information Center

    Broth, Mathias; Lundell, Fanny Forsberg

    2013-01-01

    In this paper, we consider a student error produced in a French foreign language small-group seminar, involving four Swedish L1 first-term university students of French and a native French teacher. The error in question consists of a mispronunciation of the second vowel of the name "Napoléon" in the midst of a student presentation on the…

  9. An experimental system for the study of active vibration control - Development and modeling

    NASA Astrophysics Data System (ADS)

    Batta, George R.; Chen, Anning

    A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase of uncertainty at all frequencies; missing a low-frequency mode causes much larger uncertainties at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.

  10. IRIS Mariner 9 Data Revisited. 1: An Instrumental Effect

    NASA Technical Reports Server (NTRS)

    Formisano, V.; Grassi, D.; Piccioni, G.; Pearl, John; Bjoraker, G.; Conrath, B.; Hanel, R.

    1999-01-01

    Small spurious features are present in data from the Mariner 9 Infrared Interferometer Spectrometer (IRIS). These represent a low amplitude replication of the spectrum with a doubled wavenumber scale. This replication arises principally from an internal reflection of the interferogram at the input window. An algorithm is provided to correct for the effect, which is at the 2% level. We believe that the small error in the uncorrected spectra does not materially affect previous results; however, it may be significant for some future studies at short wavelengths. The IRIS spectra are also affected by a coding error in the original calibration that results in only positive radiances. This reduces the effectiveness of averaging spectra to improve the signal to noise ratio at small signal levels.
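
    A hedged numerical sketch of the correction described above (the spectrum and the exact algorithm are stand-ins; only the ~2% replica amplitude follows the abstract): because the replica is weak, subtracting the measured spectrum itself, rescaled to the doubled wavenumber axis, removes the artifact to first order in its amplitude.

    ```python
    # Sketch: first-order removal of a low-amplitude spectral replica that
    # appears with a doubled wavenumber scale (a feature at v leaks to 2v).
    import numpy as np

    v = np.linspace(200.0, 2000.0, 1800)            # wavenumber grid (cm^-1)
    true = np.exp(-((v - 667.0) / 30.0) ** 2)       # toy CO2-like band
    alpha = 0.02                                    # replica amplitude (~2% level)
    measured = true + alpha * np.interp(v / 2.0, v, true, left=0.0)

    # Since alpha is small, subtracting the rescaled *measured* spectrum removes
    # the replica to first order in alpha (residual is O(alpha^2) ~ 4e-4).
    corrected = measured - alpha * np.interp(v / 2.0, v, measured, left=0.0)

    print("max residual:", np.max(np.abs(corrected - true)))
    ```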

  11. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot-error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate the addition of details on domain-specific factors; for these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  12. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper presents a sensitivity analysis of the relative Jacobian error with respect to small registration errors. We present a linear approximation of the relative Jacobian error and give a formula for its sensitivity with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random, smooth, biologically plausible perturbation vector fields using a cubic B-spline model. We show that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also show that small displacement errors, on average 0.53 mm, may lead to a 10% relative change in the Jacobian determinant. Finally, the average relative Jacobian error and the sensitivity of the system are positively correlated across subjects (close to +1), i.e. regions with high sensitivity have, on average, more error in the Jacobian determinant.
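
    The central quantity here, the Jacobian determinant of the estimated transformation, is easy to compute from a displacement field. The Python sketch below evaluates det(I + ∇u) by finite differences and the resulting relative Jacobian change under a small perturbation field; the fields are smooth synthetic stand-ins for registration output, not 4DCT data.

    ```python
    # Sketch: Jacobian determinant det(I + grad u) of a displacement field and
    # the relative Jacobian error induced by a small perturbation field.
    import numpy as np

    def jacobian_det(u, h):
        """u: displacement field of shape (3, nx, ny, nz); h: grid spacing."""
        grads = [np.gradient(u[i], h, h, h) for i in range(3)]  # grads[i][j] = du_i/dx_j
        F = np.zeros(u.shape[1:] + (3, 3))
        for i in range(3):
            for j in range(3):
                F[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
        return np.linalg.det(F)

    ax = np.linspace(0.0, np.pi, 32)
    h = ax[1] - ax[0]
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    u = 0.3 * np.stack([np.sin(x) * np.cos(y),          # smooth stand-in for a
                        np.sin(y) * np.cos(z),          # registration displacement
                        np.sin(z) * np.cos(x)])
    du = 0.02 * np.stack([np.sin(2 * x), np.sin(2 * y), np.sin(2 * z)])  # error field

    J, J_pert = jacobian_det(u, h), jacobian_det(u + du, h)
    rel_err = (J_pert - J) / J                          # relative Jacobian error
    print("mean |relative Jacobian error|:", float(np.mean(np.abs(rel_err))))
    ```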

  13. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon

    1998-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors, some time-dependent and some time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time-independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors: one due to the progressive change in the diurnal cycle and the other due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have eliminated the time-independent systematic errors explicitly. The two time-dependent errors cannot be assessed from each satellite, so their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K per decade). Utilising an independent method of analysis, we infer a global warming of 0.12 +/- 0.06 °C per decade from the MSU Ch 2 observations during the period 1980 to 1997.

  14. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography, and the statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and to the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance in the T/P signal of mesoscale eddies, which are not resolved by the coarse-resolution GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem; in other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

  15. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often they take the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus using only the dichotomisations of the continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small-sample-adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55%, compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%; the small-sample-corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%, equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections therefore provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts to improve the quality of evidence generated from rare disease trials, rather than replace them.
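
    The principle the augmented binary method exploits — dichotomizing a continuous component discards information — can be illustrated with a much simpler simulation. The hedged Python sketch below compares the power of a binary (responder) analysis against a continuous analysis of the same small-trial data; it is an illustration of the underlying information loss, not an implementation of the augmented binary method or its small-sample corrections.

    ```python
    # Sketch: power loss from dichotomizing a continuous endpoint in a small
    # trial (thresholds, effect size, and sample size are all illustrative).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n, effect, reps, thresh = 30, 0.6, 2000, 0.0
    hits_bin = hits_cont = 0
    for _ in range(reps):
        ctrl = rng.normal(0.0, 1.0, n)
        trt = rng.normal(effect, 1.0, n)
        # (a) binary analysis: responder = value above threshold
        table = [[(ctrl > thresh).sum(), (ctrl <= thresh).sum()],
                 [(trt > thresh).sum(), (trt <= thresh).sum()]]
        hits_bin += stats.fisher_exact(table)[1] < 0.05
        # (b) continuous analysis of the same data
        hits_cont += stats.ttest_ind(ctrl, trt)[1] < 0.05

    print(f"power, binary analysis:     {hits_bin / reps:.2f}")
    print(f"power, continuous analysis: {hits_cont / reps:.2f}")
    ```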

  16. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small-error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
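
    A one-qubit numerical sketch of the distinction drawn here (not the paper's repetition-code calculation): a coherent over-rotation accumulates amplitude linearly, so its failure probability grows like (Nε)², whereas the Pauli-twirled approximation of the same channel accumulates probability and grows like Nε².

    ```python
    # Sketch: coherent rotations vs their Pauli (stochastic) approximation on a
    # single qubit over N cycles.
    import numpy as np

    eps = 0.01                       # rotation angle per cycle (radians)
    for N in (10, 100, 1000):
        # coherent: N rotations compose to a single rotation by N*eps
        p_coherent = np.sin(N * eps / 2) ** 2        # flip probability |<1|U|0>|^2
        # Pauli model: flip with probability p = sin^2(eps/2) each cycle
        p = np.sin(eps / 2) ** 2
        p_pauli = 0.5 * (1 - (1 - 2 * p) ** N)       # random-walk accumulation
        print(f"N={N:5d}: coherent {p_coherent:.2e}  Pauli model {p_pauli:.2e}")
    ```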

  17. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft-error-tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  18. TED: A Tolerant Edit Distance for segmentation evaluation.

    PubMed

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.

  19. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture terminal (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described, and the testing is discussed. Applications of the time-shared decoder are recommended.

  20. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    NASA Astrophysics Data System (ADS)

    Colferai, D.; Niccoli, A.

    2015-04-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented for various observables of phenomenological interest. For values of the jet "radius" R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.

  1. Weighing Rocky Exoplanets with Improved Radial Velocimetry

    NASA Astrophysics Data System (ADS)

    Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium

    2016-01-01

    The synergy between Kepler and the ground-based radial velocity (RV) surveys has yielded numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets smaller than 3 Earth radii do not have their masses constrained to better than 20%, limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS added in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or super-Earth planets in the habitable zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.

  2. Effect of Random Circuit Fabrication Errors on Small Signal Gain and Phase in Helix Traveling Wave Tubes

    NASA Astrophysics Data System (ADS)

    Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.

    2007-11-01

    Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic, and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters b, the beam-wave velocity mismatch, C, the gain parameter, and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.
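
    The reported linear proportionality between the output-phase spread and the perturbation spread is what first-order perturbation theory predicts, and a toy Monte Carlo makes it visible. The cumulative-phase model below is an illustrative smooth stand-in for the Pierce small-signal equations, not the actual TWT theory.

    ```python
    # Sketch: Monte Carlo check that the output-phase standard deviation scales
    # linearly with the spread of random axial perturbations (toy model).
    import numpy as np

    rng = np.random.default_rng(1)
    n_seg, n_trials = 200, 4000            # axial segments, Monte Carlo samples

    def output_phase(db):
        # toy per-segment phase with a nonlinear dependence on the velocity
        # mismatch perturbation db (radians; coefficients are illustrative)
        return np.sum(0.05 * db + 0.3 * db**2, axis=-1)

    for sigma_b in (0.001, 0.002, 0.004, 0.008):
        db = rng.normal(0.0, sigma_b, (n_trials, n_seg))
        print(f"sigma_b={sigma_b:.3f}  sigma_phase={output_phase(db).std():.2e}")
    ```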

  3. Absorbance and fluorometric sensing with capillary wells microplates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian

    2010-12-15

    Detection and readout from small-volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in small liquid volume management. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays without the error often associated with the meniscus forming at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies, and pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results, particularly in the context of high-throughput screening. We show that the adapted design is also able to detect bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.

  4. A radiation tolerant Data link board for the ATLAS Tile Cal upgrade

    NASA Astrophysics Data System (ADS)

    Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.

    2016-01-01

    This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs and two Molex QSFP+ electro-optic modules with uplinks running at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers, which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple-mode redundancy (TMR) and internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even though that area is small.

  5. Array coding for large data memories

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.

    1982-01-01

    It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words having M symbols each. The probability of undetected error is considered, taking into account the three symbol error probabilities of interest, and a formula for determining this probability is given. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p; two different schemes are found to be of interest. The analysis of array coding shows that the probability of undetected error is very small, even for relatively large arrays.
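
    Since the exact code construction is not spelled out here, the following Python sketch assumes a simple row-and-column parity layout (one parity symbol per word plus a check word) and estimates the undetected-error probability by Monte Carlo, alongside the leading small-p analytic term (an undetected pattern needs an even number of errors in every row and column, the cheapest being the four corners of a rectangle). The symbol error probability is exaggerated so that the simulation sees events.

    ```python
    # Sketch: undetected-error probability for an N x M array with row and
    # column parity (an assumed layout, not necessarily the article's code).
    import numpy as np
    from math import comb

    rng = np.random.default_rng(2)
    N, M, trials = 8, 8, 200_000       # words, symbols per word, Monte Carlo trials
    p = 0.05                           # symbol error prob., exaggerated for the demo

    errors = rng.random((trials, N, M)) < p
    row_ok = (errors.sum(axis=2) % 2 == 0).all(axis=1)   # every row parity passes
    col_ok = (errors.sum(axis=1) % 2 == 0).all(axis=1)   # every column parity passes
    some = errors.any(axis=(1, 2))

    print("P(some error)  ~", some.mean())
    print("P(undetected)  ~", (some & row_ok & col_ok).mean())
    # leading small-p term: 4 errors on the corners of a rectangle (at this
    # exaggerated p, higher-order even patterns also contribute)
    print("4-corner term  ~", comb(N, 2) * comb(M, 2) * p**4 * (1 - p)**(N*M - 4))
    ```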

  6. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution, as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error, resulting from correcting the surface moisture flux and clipping negative water concentrations, can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis shows that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.

  7. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
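
    To make the last of those fixers concrete, here is a hedged Python sketch of a mass-conserving clipping fixer for negative water concentrations in a single model column: naive clipping adds spurious water, so the positive part is rescaled to restore the mass-weighted column total. The layer values are illustrative, not EAM fields.

    ```python
    # Sketch: mass-conserving fixer for negative water concentrations.
    import numpy as np

    q  = np.array([3.0e-3, 1.2e-3, -2.0e-5, 4.0e-4, -1.0e-6])  # specific humidity
    dp = np.array([2000.0, 3000.0, 3000.0, 1500.0, 500.0])     # layer pressure thickness

    total = np.sum(q * dp)              # column water before fixing
    q_clip = np.clip(q, 0.0, None)      # naive clipping (adds spurious mass)
    q_fix = q_clip * total / np.sum(q_clip * dp)   # rescale to restore the total

    print("column water: before", total,
          "after clip", np.sum(q_clip * dp),
          "after fixer", np.sum(q_fix * dp))
    ```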

  8. A revised 5 minute gravimetric geoid and associated errors for the North Atlantic calibration area

    NASA Technical Reports Server (NTRS)

    Mader, G. L.

    1979-01-01

    A revised 5 minute gravimetric geoid and its errors were computed for the North Atlantic calibration area using GEM-8 potential coefficients and the latest gravity data available from the Defense Mapping Agency. This effort was prompted by a number of inconsistencies and small errors found in previous calculations of this geoid. The computational method and constants used are given in detail to serve as a reference for future work.

  9. The impact of scatterometer wind data on global weather forecasting

    NASA Technical Reports Server (NTRS)

    Atlas, D.; Baker, W. E.; Kalnay, E.; Halem, M.; Woiceshyn, P. M.; Peteherych, S.

    1984-01-01

    The impact of SEASAT-A scatterometer (SASS) winds on coarse-resolution atmospheric model forecasts was assessed. The scatterometer provides high-resolution winds, but each wind can have up to four possible directions: one direction is correct, and the remainder are ambiguous "aliases". In general, the effect of objectively dealiased SASS data was found to be negligible in the Northern Hemisphere. In the Southern Hemisphere, the impact was larger and primarily beneficial when vertical temperature profile radiometer (VTPR) data were excluded. However, the inclusion of VTPR data eliminates the positive impact, indicating some redundancy between the two data sets.

  10. Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.

    PubMed

    Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P

    2015-02-20

    Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
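
    A hedged sketch of the zero-padding idea for a single-FFT Fresnel reconstruction: the output pixel pitch of this transform is λz/(N·dx), so padding the hologram from N samples to a larger grid rescales the numerical resolution at a fixed distance. The optical parameters below are illustrative.

    ```python
    # Sketch: zero-padding before a single-FFT Fresnel reconstruction to scale
    # the numerical resolution (all values illustrative).
    import numpy as np

    lam, z, dx, N = 633e-9, 0.15, 6.45e-6, 256     # wavelength, distance, pitch, samples
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    hologram = (1 + np.cos(2e5 * X)) * np.exp(-(X**2 + Y**2) / (1e-3) ** 2)  # toy fringes

    def fresnel(h, lam, z, dx, pad=1):
        n = h.shape[0] * pad
        hp = np.zeros((n, n), dtype=complex)
        s = (n - h.shape[0]) // 2
        hp[s:s + h.shape[0], s:s + h.shape[0]] = h          # zero-pad symmetrically
        xx = (np.arange(n) - n // 2) * dx
        XX, YY = np.meshgrid(xx, xx)
        chirp = np.exp(1j * np.pi / (lam * z) * (XX**2 + YY**2))
        U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hp * chirp)))
        return U, lam * z / (n * dx)                        # field, output pixel pitch

    for pad in (1, 2, 4):
        _, d_out = fresnel(hologram, lam, z, dx, pad)
        print(f"pad x{pad}: reconstruction pixel = {d_out * 1e6:.2f} um")
    ```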

  11. An acoustic filter based on layered structure

    PubMed Central

    Steer, Michael B.

    2015-01-01

    Acoustic filters (AFs) are key components to control wave propagation in multi-frequency systems. We present a design which selectively achieves acoustic filtering with a stop band and passive amplification at the high- and low-frequencies, respectively. Measurement results from the prototypes closely match the design predictions. The AF suppresses the high frequency aliasing echo by 14.5 dB and amplifies the low frequency transmission by 8.0 dB, increasing an axial resolution from 416 to 86 μm in imaging. The AF design approach is proved to be effective in multi-frequency systems. PMID:25829548

  12. Precise and rapid isotopomic analysis by (1)H-(13)C 2D NMR: Application to triacylglycerol matrices.

    PubMed

    Merchak, Noelle; Silvestre, Virginie; Rouger, Laetitia; Giraudeau, Patrick; Rizk, Toufic; Bejjani, Joseph; Akoka, Serge

    2016-08-15

    An optimized HSQC sequence was tested and applied to triacylglycerol matrices to determine their isotopic and metabolomic profiles. Spectral aliasing and non-uniform sampling were used to decrease the experimental time and to improve the resolution, respectively. Excellent long-term repeatability of the signal integrals was achieved, enabling isotopic measurements to be performed. Thirty-two commercial vegetable oils were analyzed by this methodology. The results show that the method can be used to classify oil samples according to their geographical and botanical origins. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Development of a methodology for classifying software errors

    NASA Technical Reports Server (NTRS)

    Gerhart, S. L.

    1976-01-01

    A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.

  14. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
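
    The estimate itself comes from two exponential decays. The hedged Python sketch below fits reference and interleaved sequence decays on synthetic data and applies the standard single-qubit (d = 2) interleaved-RB formula for the gate error; all numbers are made up.

    ```python
    # Sketch: interleaved randomized benchmarking analysis on synthetic decays.
    import numpy as np
    from scipy.optimize import curve_fit

    decay = lambda m, A, p, B: A * p**m + B
    m = np.arange(1, 200, 10)
    rng = np.random.default_rng(3)

    p_ref, p_int = 0.995, 0.990                    # true decay constants (synthetic)
    y_ref = decay(m, 0.5, p_ref, 0.5) + rng.normal(0, 0.003, m.size)
    y_int = decay(m, 0.5, p_int, 0.5) + rng.normal(0, 0.003, m.size)

    (_, pr, _), _ = curve_fit(decay, m, y_ref, p0=[0.5, 0.99, 0.5])
    (_, pi, _), _ = curve_fit(decay, m, y_int, p0=[0.5, 0.99, 0.5])

    d = 2                                          # single-qubit Hilbert space dimension
    r_gate = (d - 1) / d * (1 - pi / pr)           # interleaved gate error estimate
    print(f"estimated gate error r = {r_gate:.4f} (true {0.5 * (1 - p_int / p_ref):.4f})")
    ```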

  15. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations, with magnitudes of applied observation error varying from zero to twice the estimated realistic error, are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.

  17. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.

  18. Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology

    PubMed Central

    van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin

    2012-01-01

    Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high-precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). The digital files were imported into software, and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to measurements made on a high-accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS; the distance errors for the CEREC were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS in combination with a high-accuracy scanning protocol resulted in the smallest and most consistent errors of all three scanners tested, considering mean distance errors in full-arch impressions both in absolute values and in consistency for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1-2 and the largest errors between cylinders 1-3, although the absolute difference from the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch, due to an accumulation of registration errors of the patched 3D surfaces, could be observed in this study design, but the effects were statistically not significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that will ensure the most accurate digital impression should be used. In our study model that was the Lava COS with the high-accuracy scanning protocol. PMID:22937030

  19. Translating Research Into Practice: Voluntary Reporting of Medication Errors in Critical Access Hospitals

    ERIC Educational Resources Information Center

    Jones, Katherine J.; Cochran, Gary; Hicks, Rodney W.; Mueller, Keith J.

    2004-01-01

    Context:Low service volume, insufficient information technology, and limited human resources are barriers to learning about and correcting system failures in small rural hospitals. This paper describes the implementation of and initial findings from a voluntary medication error reporting program developed by the Nebraska Center for Rural Health…

  20. The Effects of Observation Errors on the Attack Vulnerability of Complex Networks

    DTIC Science & Technology

    2012-11-01

    …In more detail, to construct a true network we select a topology: Erdős–Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), or small-world… Efficiency of Scale-Free Networks: Error and Attack Tolerance. Physica A, Volume 320, pp. 622-642. … Erdős, P. & Rényi, A., 1959. On Random Graphs, I.

  1. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  2. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  3. Dichrometer errors resulting from large signals or improper modulator phasing.

    PubMed

    Sutherland, John C

    2012-09-01

    A single-beam spectrometer equipped with a photoelastic modulator can be configured to measure a number of different parameters useful in characterizing chemical and biochemical materials including natural and magnetic circular dichroism, linear dichroism, natural and magnetic fluorescence-detected circular dichroism, and fluorescence polarization anisotropy as well as total absorption and fluorescence. The derivations of the mathematical expressions used to extract these parameters from ultraviolet, visible, and near-infrared light-induced electronic signals in a dichrometer assume that the dichroic signals are sufficiently small that certain mathematical approximations will not introduce significant errors. This article quantifies errors resulting from these assumptions as a function of the magnitude of the dichroic signals. In the case of linear dichroism, improper modulator programming can result in errors greater than those resulting from the assumption of small signal size, whereas for fluorescence polarization anisotropy, improper modulator phase alone gives incorrect results. Modulator phase can also impact the values of total absorbance recorded simultaneously with linear dichroism and total fluorescence. Copyright © 2012 Wiley Periodicals, Inc., A Wiley Company.

  4. Windprofiler optimization using digital deconvolution procedures

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to the data acquisition procedures used for windprofiler radars have the potential to improve height coverage at optimum resolution and to permit improved height resolution; a few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, yet this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on real-time deconvolution. Using several multi-core CPUs, we have been able to achieve processing speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any special hardware except for a transmitter (and associated drivers), a receiver and a digitizer. No digital signal processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have been able not only to optimize height resolution but also to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see; resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.
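
    As a minimal illustration of the deconvolution step (pulse shape, noise level, and profile all invented for the demo), the Python sketch below de-smears a range profile with an FFT-based Wiener (regularized inverse) filter, recovering two closely spaced layers that the transmitted pulse merges.

    ```python
    # Sketch: Wiener deconvolution of a range profile smeared by the pulse shape.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1024
    true = np.zeros(n)
    true[[200, 212, 600]] = [1.0, 0.7, 0.4]      # thin layers in the profile

    idx = np.arange(-24, 25)                     # transmitted pulse (Gaussian)
    pulse = np.zeros(n)
    pulse[idx % n] = np.exp(-0.5 * (idx / 8.0) ** 2)
    pulse /= pulse.sum()

    measured = np.real(np.fft.ifft(np.fft.fft(true) * np.fft.fft(pulse)))
    measured += rng.normal(0.0, 1e-3, n)         # receiver noise

    H = np.fft.fft(pulse)
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-3)    # regularized inverse filter
    recovered = np.real(np.fft.ifft(np.fft.fft(measured) * wiener))

    for peak in (200, 212, 600):
        print(f"bin {peak}: true {true[peak]:.2f} "
              f"smeared {measured[peak]:.2f} deconvolved {recovered[peak]:.2f}")
    ```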

  5. Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.

    PubMed

    Raposo, Carolina; Antunes, Michel; Barreto, Joao P.

    2017-08-09

    The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.

  6. Impacts of motivational valence on the error-related negativity elicited by full and partial errors.

    PubMed

    Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki

    2016-02-01

    Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small-satellite sensors, high-performance estimation remains a central research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved plenty of results. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which imposes higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small-satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of the existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small-satellite attitude estimation than the classical unscented Kalman filter (UKF).
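
    For orientation, the sketch below implements just the generic unscented-transform step that any UKF (including this one) builds on: sigma points of the state distribution are pushed through a nonlinear measurement function to predict the measurement mean and covariance. The toy measurement model is an assumption; the RDG-UKF's second gain and orthogonality machinery are not reproduced here.

    ```python
    # Sketch: the unscented transform underlying a UKF measurement prediction.
    import numpy as np

    def sigma_points(x, P, kappa=0.0):
        n = x.size
        S = np.linalg.cholesky((n + kappa) * P)
        pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 0.5 / (n + kappa))
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    h = lambda x: np.array([np.sin(x[0]) * np.cos(x[1]), np.sin(x[1])])  # toy sensor

    x = np.array([0.1, -0.2])                 # state mean (e.g., small attitude angles)
    P = np.diag([0.02, 0.03])                 # state covariance

    X, w = sigma_points(x, P, kappa=1.0)
    Z = np.array([h(xi) for xi in X])         # propagate sigma points
    z_mean = w @ Z
    Pzz = sum(wi * np.outer(zi - z_mean, zi - z_mean) for wi, zi in zip(w, Z))
    print("predicted measurement:", z_mean)
    print("innovation covariance (before sensor noise):\n", Pzz)
    ```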

  8. Errors in Measuring Water Potentials of Small Samples Resulting from Water Adsorption by Thermocouple Psychrometer Chambers 1

    PubMed Central

    Bennett, Jerry M.; Cortes, Peter M.

    1985-01-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367

  9. Errors in measuring water potentials of small samples resulting from water adsorption by thermocouple psychrometer chambers.

    PubMed

    Bennett, J M; Cortes, P M

    1985-09-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).

  11. Linear quadratic Gaussian and feedforward controllers for the DSS-13 antenna

    NASA Technical Reports Server (NTRS)

    Gawronski, W. K.; Racho, C. S.; Mellstrom, J. A.

    1994-01-01

    The controller development and the tracking performance evaluation for the DSS-13 antenna are presented. A trajectory preprocessor, linear quadratic Gaussian (LQG) controller, feedforward controller, and their combination were designed, built, analyzed, and tested. The antenna exhibits nonlinear behavior when the input to the antenna and/or the derivative of this input exceeds the imposed limits; for slewing and acquisition commands, these limits are typically violated. A trajectory preprocessor was designed to ensure that the antenna behaves linearly, thus preventing nonlinear limit cycling. The estimator model for the LQG controller was identified from the data obtained from the field test. Based on an LQG balanced representation, a reduced-order LQG controller was obtained. The feedforward controller and the combination of the LQG and feedforward controller were also investigated. The performance of the controllers was evaluated with the tracking errors (due to following a trajectory) and the disturbance errors (due to the disturbances acting on the antenna). The LQG controller has good disturbance rejection properties and satisfactory tracking errors. The feedforward controller has small tracking errors but poor disturbance rejection properties. The combined LQG and feedforward controller exhibits small tracking errors as well as good disturbance rejection properties. However, the cost for this performance is the complexity of the controller.

  12. SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.

    PubMed

    Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S

    2012-06-01

    With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map, and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10^8 to 10^6, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
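
    A toy re-creation of the experiment's logic (the matrix sizes, the multiplicative noise model, and the reuse of the 3.8% level are assumptions for illustration) optimizes a fluence map against a noisy DDC matrix and evaluates the resulting plan with the noise-free one:

        import numpy as np

        rng = np.random.default_rng(0)
        n_voxels, n_beamlets = 200, 50
        D_true = rng.random((n_voxels, n_beamlets))       # stand-in noise-free DDC matrix
        D_noisy = D_true * (1 + 0.038 * rng.standard_normal(D_true.shape))  # ~3.8% noise
        d_presc = np.ones(n_voxels)                       # prescribed dose (arbitrary units)

        def optimize(D, d, iters=2000):
            # Penalty-based quadratic objective ||Dx - d||^2, projected gradient descent.
            x = np.ones(D.shape[1])
            lr = 1.0 / np.linalg.norm(D, 2) ** 2          # safe step size
            for _ in range(iters):
                x = np.maximum(x - lr * (D.T @ (D @ x - d)), 0)   # non-negative fluence
            return x

        dose_noisy = D_true @ optimize(D_noisy, d_presc)  # actual dose of noisy-DDC plan
        dose_clean = D_true @ optimize(D_true, d_presc)
        # Dose-domain deviation; compare with the 3.8% noise in the DDC matrix.
        print(np.linalg.norm(dose_noisy - dose_clean) / np.linalg.norm(dose_clean))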

  13. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
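
    A minimal sketch of the proposed measure, assuming per-cell log-likelihoods from whatever probabilistic sensor model is in use; the entropy is computable online, with no ground truth needed:

        import numpy as np

        def posterior_entropy_bits(log_likelihoods, prior=None):
            # Conditional entropy (bits) of the discrete location posterior
            # for one measurement, given per-cell log-likelihoods.
            ll = np.asarray(log_likelihoods, dtype=float)
            if prior is not None:
                ll = ll + np.log(prior)
            p = np.exp(ll - ll.max())          # numerically stable normalisation
            p /= p.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        print(posterior_entropy_bits([0.0, -8.0, -9.0]))  # peaked posterior: near 0 bits
        print(posterior_entropy_bits([0.0, 0.0, 0.0]))    # flat posterior: log2(3) bits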

  14. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  15. Deformation Estimation In Non-Urban Areas Exploiting High Resolution SAR Data

    NASA Astrophysics Data System (ADS)

    Goel, Kanika; Adam, Nico

    2012-01-01

    Advanced techniques such as the Small Baseline Subset Algorithm (SBAS) have been developed for terrain motion mapping in non-urban areas, with a focus on extracting information from distributed scatterers (DSs). SBAS uses small baseline differential interferograms (to limit the effects of geometric decorrelation), and these are typically multilooked to reduce phase noise, resulting in a loss of resolution. Various error sources, e.g., phase unwrapping errors, topographic errors, temporal decorrelation, and atmospheric effects, also affect the interferometric phase. The aim of our work is improved deformation monitoring in non-urban areas exploiting high resolution SAR data. The paper provides technical details and a processing example of a newly developed technique which incorporates an adaptive spatial phase filtering algorithm for accurate high resolution differential interferometric stacking, followed by deformation retrieval via the SBAS approach, where we perform the phase inversion using a more robust L1 norm minimization.

  16. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    NASA Astrophysics Data System (ADS)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

    The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
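
    For context, the cross ratio of [1] can be sketched as follows (the counts are invented; the abstract's point is precisely that this first-order cancellation fails at second order):

        import numpy as np

        def cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn):
            # Asymmetry from left/right detector counts for two opposite
            # polarization states. Detector acceptance and luminosity
            # differences cancel to first order; second-order geometry and
            # rate effects remain, which is what the abstract models.
            r = np.sqrt((L_up * R_dn) / (L_dn * R_up))
            return (r - 1.0) / (r + 1.0)        # epsilon ~ p * A_y

        print(cross_ratio_asymmetry(1100.0, 900.0, 900.0, 1100.0))  # ~0.1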

  17. Spectral resolution enhancement of Fourier-transform spectrometer based on orthogonal shear interference using Wollaston prism

    NASA Astrophysics Data System (ADS)

    Cong, Lin-xiao; Huang, Min; Cai, Qi-sheng

    2017-10-01

    In this paper, a multi-line interferogram stitching method based on orthogonal shear using the Wollaston prism(WP) was proposed with a 2D projection interferogram recorded through the rotation of CCD, making the spectral resolution of Fourier-Transform spectrometer(FTS) of a limited spatial size increase by at least three times. The fringes on multi-lines were linked with the pixels of equal optical path difference (OPD). Ideally, the error of sampled phase within one pixel was less than half the wavelength, ensuring consecutive values in the over-sampled dimension while aliasing in another. In the simulation, with the calibration of 1.064μm, spectral lines at 1.31μm and 1.56μm of equal intensity were tested and observed. The result showed a bias of 0.13% at 1.31μm and 1.15% at 1.56μm in amplitude, and the FWHM at 1.31μm reduced from 25nm to 8nm after the sample points increased from 320 to 960. In the comparison of reflectance spectrum of carnauba wax within near infrared(NIR) band, the absorption peak at 1.2μm was more obvious and zoom of the band 1.38 1.43μm closer to the reference, although some fluctuation was in the short-wavelength region arousing the spectral crosstalk. In conclusion, with orthogonal shear based on the rotation of the CCD relative to the axis of WP, the spectral resolution of static FTS was enhanced by the projection of fringes to the grid coordinates and stitching the interferograms into a larger OPD, which showed the advantages of cost and miniaturization in the space-constrained NIR applications.

  18. GRAPPA reconstructed wave-CAIPI MP-RAGE at 7 Tesla.

    PubMed

    Schwarz, Jolanda M; Pracht, Eberhard D; Brenner, Daniel; Reuter, Martin; Stöcker, Tony

    2018-04-16

    The aim of this project was to develop a GRAPPA-based reconstruction for wave-CAIPI data. Wave-CAIPI fully exploits the 3D coil sensitivity variations by combining corkscrew k-space trajectories with CAIPIRINHA sampling. It reduces artifacts and limits reconstruction induced spatially varying noise enhancement. The GRAPPA-based wave-CAIPI method is robust and does not depend on the accuracy of coil sensitivity estimations. We developed a GRAPPA-based, noniterative wave-CAIPI reconstruction algorithm utilizing multiple GRAPPA kernels. For data acquisition, we implemented a fast 3D magnetization-prepared rapid gradient-echo wave-CAIPI sequence tailored for ultra-high field application. The imaging results were evaluated by comparing the g-factor and the root mean square error to Cartesian CAIPIRINHA acquisitions. Additionally, to assess the performance of subcortical segmentations (calculated by FreeSurfer), the data were analyzed across five subjects. Sixteen-fold accelerated whole brain magnetization-prepared rapid gradient-echo data (1 mm isotropic resolution) were acquired in 40 seconds at 7T. A clear improvement in image quality compared to Cartesian CAIPIRINHA sampling was observed. For the chosen imaging protocol, the results of 16-fold accelerated wave-CAIPI acquisitions were comparable to results of 12-fold accelerated Cartesian CAIPIRINHA. In comparison to the originally proposed SENSitivity Encoding reconstruction of Wave-CAIPI data, the GRAPPA approach provided similar image quality. High-quality, wave-CAIPI magnetization-prepared rapid gradient-echo images can be reconstructed by means of a GRAPPA-based reconstruction algorithm. Even for high acceleration factors, the noniterative reconstruction is robust and does not require coil sensitivity estimations. By altering the aliasing pattern, ultra-fast whole-brain structural imaging becomes feasible. © 2018 International Society for Magnetic Resonance in Medicine.

  19. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions and, correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.
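
    One of the listed fixes, correction of a vesicle's area, can be caricatured by a uniform rescaling about the centroid that restores the shoelace area (the paper's actual LRCA also corrects arc-length and handles collisions and aliasing):

        import numpy as np

        def correct_area(x, y, target_area):
            # Shoelace area of the closed membrane curve.
            area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
            cx, cy = x.mean(), y.mean()
            s = np.sqrt(target_area / area)      # uniform rescale about the centroid
            return cx + s * (x - cx), cy + s * (y - cy)

        theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        x, y = 0.98 * np.cos(theta), 0.98 * np.sin(theta)  # numerically shrunken vesicle
        x2, y2 = correct_area(x, y, np.pi)                 # restore the physical area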

  20. Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.

    2018-02-01

    The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
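
    The quoted platform counts follow from simple grid arithmetic over the 35°S-70°S band; this back-of-envelope version ignores area weighting and land cover, so the numbers are indicative only:

        import math

        lon_span, lat_span = 360.0, 70.0 - 35.0   # zonal band from 35S to 70S

        def n_platforms(dlon, dlat):
            # Platforms needed at one per dlon x dlat cell.
            return math.ceil(lon_span / dlon) * math.ceil(lat_span / dlat)

        print(n_platforms(20, 6))    # carbon/heat inventory: 108, i.e. ~100
        print(n_platforms(30, 6))    # carbon flux: 72
        print(n_platforms(90, 10))   # heat flux: 16
        print(n_platforms(7, 3))     # ~600-float Argo density: 624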

  1. Re-assessing Present Day Global Mass Transport and Glacial Isostatic Adjustment From a Data Driven Approach

    NASA Astrophysics Data System (ADS)

    Wu, X.; Jiang, Y.; Simonsen, S.; van den Broeke, M. R.; Ligtenberg, S.; Kuipers Munneke, P.; van der Wal, W.; Vermeersen, B. L. A.

    2017-12-01

    Determining present-day mass transport (PDMT) is complicated by the fact that most observations contain signals from both present-day ice melting and Glacial Isostatic Adjustment (GIA). Despite decades of progress in geodynamic modeling and new observations, significant uncertainties remain in both. The key to separating present-day ice mass change from GIA signals is to include data of different physical characteristics. We designed an approach to separate PDMT and GIA signatures by estimating them simultaneously using globally distributed interdisciplinary data with distinct physical information and a dynamically constructed a priori GIA model. We conducted a high-resolution global reappraisal of present-day ice mass balance, with a focus on Earth's polar regions and their contribution to global sea-level rise, using a combination of ICESat, GRACE gravity, surface geodetic velocity data, and an ocean bottom pressure model. Adding ice altimetry supplies critically needed dual data types over the interiors of ice-covered regions to enhance the separation of PDMT and GIA signatures, and is expected to yield accuracies roughly half an order of magnitude higher for GIA and, consequently, for ice mass balance estimates. The global data based approach can adequately address issues of PDMT- and GIA-induced geocenter motion and long-wavelength signatures important for large areas such as Antarctica and for global mean sea level. In conjunction with the dense altimetry data, we solved for PDMT coefficients up to degree and order 180 by using a higher-resolution GRACE data set and a high-resolution a priori PDMT model that includes detailed geographic boundaries. The high-resolution approach solves the problem of multiple resolutions in various data types, greatly reduces aliased errors from a low-degree truncation, and at the same time enhances the separation of signatures from adjacent regions such as Greenland and the Canadian Arctic territories.

  2. Using the theory of small perturbations in performance calculations of the RBMK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, N.V.; Druzhinin, V.E.; Pogosbekyan, L.R.

    The theory of small perturbations in reactor physics is discussed and applied to two-dimensional calculations of the RBMK. The classical theory of small perturbations leads to considerable errors in these calculations because the perturbations cannot be considered small. The modified theory of small perturbations presented here can be used in atomic power stations for determining reactivity effects and channel reloading rates in reactors, and also for assessing the reactivity stored in control rods.

  3. Poster - Thurs Eve-12: A needle-positioning robot co-registered with volumetric x-ray micro-computed tomography images for minimally-invasive small-animal interventions.

    PubMed

    Waspe, A C; Holdsworth, D W; Lacefield, J C; Fenster, A

    2008-07-01

    Preclinical research protocols often require the delivery of biological substances to specific targets in small animal disease models. To target biologically relevant locations in mice accurately, the needle positioning error needs to be < 200 μm. If targeting is inaccurate, experimental results can be inconclusive or misleading. We have developed a robotic manipulator that is capable of positioning a needle with a mean error < 100 μm. An apparatus and method were developed for integrating the needle-positioning robot with volumetric micro-computed tomography image guidance for interventions in small animals. Accurate image-to-robot registration is critical for integration as it enables targets identified in the image to be mapped to physical coordinates inside the animal. Registration is accomplished by injecting barium sulphate into needle tracks as the robot withdraws the needle from target points in a tissue-mimicking phantom. Registration accuracy is therefore affected by the positioning error of the robot and is assessed by measuring the point-to-line fiducial and target registration errors (FRE, TRE). Centroid points along cross-sectional slices of the track are determined using region growing segmentation followed by application of a center-of-mass algorithm. The centerline points are registered to needle trajectories in robot coordinates by applying an iterative closest point algorithm between points and lines. Implementing this procedure with four fiducial needle tracks produced a point-to-line FRE and TRE of 246 ± 58 μm and 194 ± 18 μm, respectively. The proposed registration technique produced a TRE < 200 μm, in the presence of robot positioning error, meeting design specification. © 2008 American Association of Physicists in Medicine.
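
    The point-to-line error metric behind the FRE/TRE figures reduces to a short computation (the coordinates below are made up; in the study the lines are registered needle trajectories and the points are segmented track centroids):

        import numpy as np

        def point_to_line_distance(p, a, d):
            # Distance from point p to the line through a with direction d.
            d = d / np.linalg.norm(d)
            v = p - a
            return float(np.linalg.norm(v - (v @ d) * d))

        p = np.array([10.0, 5.0, 3.0])   # registered target point (mm, hypothetical)
        a = np.array([10.1, 5.0, 0.0])   # point on a robot needle trajectory (mm)
        d = np.array([0.0, 0.0, 1.0])    # needle direction
        print(point_to_line_distance(p, a, d) * 1000, "um")  # 100 um for these numbers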

  4. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
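
    A compact simulation in this spirit, assuming a Thurstone Case V model and an arbitrary clipping rule for extreme proportions, shows how binomial choice errors propagate into the scaled values:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        true_scale = np.array([0.0, 0.5, 1.0, 1.5])   # stimuli on an interval scale
        n_obs = 30                                     # sampling size per pair
        k = len(true_scale)

        # Simulate paired comparisons: P(i preferred over j) = Phi(s_i - s_j).
        P_hat = np.zeros((k, k))
        for i in range(k):
            for j in range(k):
                if i != j:
                    p = norm.cdf(true_scale[i] - true_scale[j])
                    P_hat[i, j] = rng.binomial(n_obs, p) / n_obs  # binomial choice errors

        # Case V solution: scale value = row mean of z-scores (clip 0/1 proportions).
        Z = norm.ppf(np.clip(P_hat, 1 / (2 * n_obs), 1 - 1 / (2 * n_obs)))
        np.fill_diagonal(Z, 0.0)
        est = Z.mean(axis=1)
        print(est - est[0])   # recovered scale, anchored at stimulus 0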

  5. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1°, 3°, and 5° can respectively introduce up to 2.6%, 7.7%, and 12.8% error into the measured irradiance, and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo.
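
    The dominant direct-beam term admits a closed-form error estimate; the sketch below uses the standard tilted-surface incidence-angle formula and ignores the diffuse component, so it slightly overestimates the paper's full-spectrum figures:

        import numpy as np

        def direct_tilt_error(sza_deg, tilt_deg, dazi_deg):
            # Relative error in direct irradiance for a cosine-response sensor
            # tilted by tilt_deg; dazi_deg is the angle between tilt direction
            # and solar azimuth. Diffuse light is ignored here.
            sza, tilt, dazi = np.radians([sza_deg, tilt_deg, dazi_deg])
            cos_inc = (np.cos(sza) * np.cos(tilt)
                       + np.sin(sza) * np.sin(tilt) * np.cos(dazi))
            return cos_inc / np.cos(sza) - 1.0

        # Worst case (tilted toward the sun) at 60 deg solar zenith angle; the
        # direct-only bound slightly exceeds the quoted 2.6/7.7/12.8% values
        # because diffuse light dilutes the tilt effect.
        for t in (1, 3, 5):
            print(t, f"{direct_tilt_error(60, t, 0):+.1%}")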

  6. Impact of random pointing and tracking errors on the design of coherent and incoherent optical intersatellite communication links

    NASA Technical Reports Server (NTRS)

    Chen, Chien-Chung; Gardner, Chester S.

    1989-01-01

    Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems are then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.

  7. Analysis of frequency mixing error on heterodyne interferometric ellipsometry

    NASA Astrophysics Data System (ADS)

    Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan

    2007-11-01

    A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency and common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation mainly resulting from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS) and the elliptical polarization and non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality contributes nothing to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have the major effect on the error.
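
    A schematic Jones-matrix fragment (the leakage value is invented, and the first-order sinusoidal phase-error approximation stands in for the paper's full analysis) illustrates how PBS imperfection produces a periodic phase error:

        import numpy as np

        E1 = np.array([1.0, 0.0])        # f1 component, nominally x-polarized
        E2 = np.array([0.0, 1.0])        # f2 component, nominally y-polarized

        eps = 0.02                        # assumed amplitude leakage of the imperfect PBS
        T = np.array([[1.0, 0.0],         # transmission port: passes x,
                      [0.0, eps]])        # but leaks a little of y (frequency mixing)

        a1 = T @ E1                       # wanted field in the measurement arm
        a2 = T @ E2                       # leaked field at the other heterodyne frequency

        # Usual first-order heterodyne nonlinearity: phase error ~ alpha*sin(phi).
        alpha = np.linalg.norm(a2) / np.linalg.norm(a1)
        phi = np.linspace(0.0, 2.0 * np.pi, 9)
        print(np.degrees(alpha * np.sin(phi)))   # periodic error, ~1.1 deg peak here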

  8. On the Retrieval of Geocenter Motion from Gravity Data

    NASA Astrophysics Data System (ADS)

    Rosat, S.; Mémin, A.; Boy, J. P.; Rogister, Y. J. G.

    2017-12-01

    The center of mass of the whole Earth, the so-called geocenter, is moving with respect to the center of mass of the solid Earth because of the loading exerted by the Earth's fluid layers on the solid crust. Space geodetic techniques tying satellites and ground stations (e.g. GNSS, SLR and DORIS) have been widely employed to estimate the geocenter motion. Harmonic degree-1 variations of the gravity field are associated with the geocenter displacement. We show that ground records of time-varying gravity from Superconducting Gravimeters (SGs) can be used to constrain the geocenter motion. Two major difficulties have to be tackled: (1) the sensitivity of surface gravimetric measurements to local mass changes, and in particular hydrological and atmospheric variability; (2) the spatial aliasing (spectral leakage) of spherical harmonic degrees higher than 1 induced by the under-sampling of the station distribution. The largest gravity variations can be removed from the SG data by subtracting solid and oceanic tides as well as atmospheric and hydrologic effects using global models. However, some hydrological signal may still remain. Since surface water content is well modelled using GRACE observations, we investigate how the spatial aliasing in SG data can be reduced by employing GRACE solutions when retrieving geocenter motion. We show synthetic simulations using complete surface loading models together with GRACE solutions computed at SG stations. In order to retrieve the degree-one gravity variations that are associated with the geocenter motion, we use a multi-station stacking method that performs better than a classical spherical harmonic stacking when the station distribution is inhomogeneous. We also test the influence of the network configuration on the estimate of the geocenter motion. An inversion using SG and GRACE observations is finally presented and the results are compared with previous geocenter estimates.

  9. Controlling the numerical Cerenkov instability in PIC simulations using a customized finite difference Maxwell solver and a local FFT based current correction

    DOE PAGES

    Li, Fei; Yu, Peicheng; Xu, Xinlu; ...

    2017-01-12

    In this study we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI) which arises when a plasma (neutral or non-neutral) relativistically drifts on a grid when using the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of a FDTD Maxwell solver by using a customized higher order finite difference operator for the spatial derivative along the direction of the drift (the 1̂ direction). We show that this eliminates the main NCI modes with moderate |k1|, while keeping additional main NCI modes well outside the range of physical interest with higher |k1|. These main NCI modes can be easily filtered out along with first spatial aliasing NCI modes which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1̂, which typically has many more cells than other directions for the problems of interest. We show that FFTs can be performed locally to current on each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss' Law is satisfied. Lastly, we present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame, in which no evidence of the NCI is observed when using this customized Maxwell solver together with its NCI elimination scheme.

  10. Daily estimates of the migrating tide and zonal mean temperature in the mesosphere and lower thermosphere derived from SABER data

    NASA Astrophysics Data System (ADS)

    Ortland, David A.

    2017-04-01

    Satellites provide a global view of the structure in the fields that they measure. In the mesosphere and lower thermosphere, the dominant features in these fields at low zonal wave number are contained in the zonal mean, quasi-stationary planetary waves, and tide components. Due to the nature of the satellite sampling pattern, stationary, diurnal, and semidiurnal components are aliased and spectral methods are typically unable to separate the aliased waves over short time periods. This paper presents a data processing scheme that is able to recover the daily structure of these waves and the zonal mean state. The method is validated by using simulated data constructed from a mechanistic model, and then applied to Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperature measurements. The migrating diurnal tide extracted from SABER temperatures for 2009 has a seasonal variability with peak amplitude (20 K at 95 km) in February and March and minimum amplitude (less than 5 K at 95 km) in early June and early December. Higher frequency variability includes a change in vertical structure and amplitude during the major stratospheric warming in January. The migrating semidiurnal tide extracted from SABER has variability on a monthly time scale during January through March, minimum amplitude in April, and largest steady amplitudes from May through September. Modeling experiments were performed that show that much of the variability on seasonal time scales in the migrating tides is due to changes in the mean flow structure and the superposition of the tidal responses to water vapor heating in the troposphere and ozone heating in the stratosphere and lower mesosphere.
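
    The full separation performed in the paper is far more involved, but the core least-squares extraction of a zonal mean plus a migrating (sun-synchronous) diurnal tide can be caricatured as below; the idealized random global sampling is an assumption, and it is precisely the real satellite sampling pattern that makes such systems ill-conditioned over short windows:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 400
        t = rng.uniform(0.0, 3.0, n)                 # observation times (days)
        lon = rng.uniform(0.0, 2 * np.pi, n)         # observation longitudes (rad)

        Omega = 2.0 * np.pi                          # diurnal frequency (rad/day)
        phase = Omega * t + lon                      # migrating tide follows the sun
        truth = 5.0 + 20.0 * np.cos(phase - 1.0)     # zonal mean + tide (K)
        y = truth + 0.5 * rng.standard_normal(n)     # noisy "measurements"

        # Linear least squares for [mean, cos, sin] of the migrating diurnal phase:
        G = np.column_stack([np.ones(n), np.cos(phase), np.sin(phase)])
        m, c, s = np.linalg.lstsq(G, y, rcond=None)[0]
        print(m, np.hypot(c, s), np.arctan2(s, c))   # ~5 K, ~20 K amplitude, ~1 rad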

  11. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm

    PubMed Central

    Brayfield, Brad P.

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
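
    The familiarity computation itself is simple to sketch; the panorama size, the 5-degrees-per-column discretization, and the single stored view are toy assumptions:

        import numpy as np

        def familiarity(current, memory_bank):
            # Minimum sum-of-squared-differences against all stored training
            # views: low = familiar.
            return min(float(np.sum((current - m) ** 2)) for m in memory_bank)

        def best_heading(panorama, memory_bank, width):
            # Scan candidate headings by rotating the panorama columnwise and
            # keep the rotation whose visible window looks most familiar.
            scores = []
            for shift in range(panorama.shape[1]):
                view = np.roll(panorama, -shift, axis=1)[:, :width]
                scores.append(familiarity(view, memory_bank))
            return int(np.argmin(scores))    # column offset ~ steering direction

        rng = np.random.default_rng(3)
        pano = rng.random((8, 72))           # 8x72 panoramic snapshot (5 deg/column)
        memory = [np.roll(pano, -20, axis=1)[:, :24]]    # view stored during training
        print(best_heading(pano, memory, 24) * 5, "degrees")   # -> 100 degrees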

  12. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
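
    The decomposition step can be sketched as follows; the zero-filled inverse FFT is only a placeholder for the paper's standard or edge- and joint-sparsity-guided CS solver, and the toy single-coil data stand in for multi-coil k-space:

        import numpy as np

        rng = np.random.default_rng(4)

        def decompose_equidistant(sample_rows, n_subsets=2):
            # Split a regular (coherent) set of sampled k-space rows into
            # random subsets so each subset looks incoherent, as CS prefers.
            rows = np.array(sample_rows)
            rng.shuffle(rows)
            return np.array_split(rows, n_subsets)

        def recon_subset(kspace, rows):
            # Placeholder reconstruction: zero-filled inverse FFT stands in
            # for the CS solver applied to each random subset.
            masked = np.zeros_like(kspace)
            masked[rows, :] = kspace[rows, :]
            return np.fft.ifft2(masked)

        kspace = np.fft.fft2(rng.random((64, 64)))     # toy fully sampled data
        acquired = np.arange(0, 64, 2)                 # equidistant PPI pattern (R=2)
        recons = [recon_subset(kspace, r) for r in decompose_equidistant(acquired)]
        avg = np.mean(recons, axis=0)                  # averaged subset reconstructions
        print(np.abs(avg - np.fft.ifft2(kspace)).max())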

  13. Controlling the numerical Cerenkov instability in PIC simulations using a customized finite difference Maxwell solver and a local FFT based current correction

    NASA Astrophysics Data System (ADS)

    Li, Fei; Yu, Peicheng; Xu, Xinlu; Fiuza, Frederico; Decyk, Viktor K.; Dalichaouch, Thamine; Davidson, Asher; Tableman, Adam; An, Weiming; Tsung, Frank S.; Fonseca, Ricardo A.; Lu, Wei; Mori, Warren B.

    2017-05-01

    In this paper we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI) which arises when a plasma (neutral or non-neutral) relativistically drifts on a grid when using the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of a FDTD Maxwell solver by using a customized higher order finite difference operator for the spatial derivative along the direction of the drift (the 1̂ direction). We show that this eliminates the main NCI modes with moderate |k1|, while keeping additional main NCI modes well outside the range of physical interest with higher |k1|. These main NCI modes can be easily filtered out along with first spatial aliasing NCI modes which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1̂, which typically has many more cells than other directions for the problems of interest. We show that FFTs can be performed locally to current on each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss' Law is satisfied. We present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame, in which no evidence of the NCI is observed when using this customized Maxwell solver together with its NCI elimination scheme.

  14. Destroying Aliases from the Ground and Space: Super-Nyquist ZZ Cetis in K2 Long Cadence Data

    NASA Astrophysics Data System (ADS)

    Bell, Keaton J.; Hermes, J. J.; Vanderbosch, Z.; Montgomery, M. H.; Winget, D. E.; Dennihy, E.; Fuchs, J. T.; Tremblay, P.-E.

    2017-12-01

    With typical periods of the order of 10 minutes, the pulsation signatures of ZZ Ceti variables (pulsating hydrogen-atmosphere white dwarf stars) are severely undersampled by long-cadence (29.42 minutes per exposure) K2 observations. Nyquist aliasing renders the intrinsic frequencies ambiguous, stifling precision asteroseismology. We report the discovery of two new ZZ Cetis in long-cadence K2 data: EPIC 210377280 and EPIC 220274129. Guided by three to four nights of follow-up, high-speed (≤30 s) photometry from the McDonald Observatory, we recover accurate pulsation frequencies for K2 signals that reflected four to five times off the Nyquist with the full precision of over 70 days of monitoring (∼0.01 μHz). In turn, the K2 observations enable us to select the correct peaks from the alias structure of the ground-based signals caused by gaps in the observations. We identify at least seven independent pulsation modes in the light curves of each of these stars. For EPIC 220274129, we detect three complete sets of rotationally split ℓ = 1 (dipole mode) triplets, which we use to asteroseismically infer the stellar rotation period of 12.7 ± 1.3 hr. We also detect two sub-Nyquist K2 signals that are likely combination (difference) frequencies. We attribute our inability to match some of the K2 signals to the ground-based data to changes in pulsation amplitudes between epochs of observation. Model fits to SOAR spectroscopy place both EPIC 210377280 and EPIC 220274129 near the middle of the ZZ Ceti instability strip, with Teff = 11,590 ± 200 K and 11,810 ± 210 K, and masses 0.57 ± 0.03 M☉ and 0.62 ± 0.03 M☉, respectively.
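
    The bookkeeping for frequencies reflected off the Nyquist is compact; the sketch below (with a made-up 10-minute pulsation) folds a super-Nyquist frequency into the long-cadence band and recovers it once the reflection count is known from follow-up photometry:

        def unalias(f_obs, f_nyq, n_reflections):
            # Map an observed sub-Nyquist frequency back to the intrinsic one
            # after n reflections off the Nyquist frequency.
            if n_reflections % 2 == 0:
                return n_reflections * f_nyq + f_obs
            return (n_reflections + 1) * f_nyq - f_obs

        f_nyq = 1e6 / (2 * 29.42 * 60)       # ~283.3 uHz for the 29.42-min cadence
        f_true = 1666.7                       # a ~10-minute pulsation, in uHz
        k, f = divmod(f_true, f_nyq)          # fold into [0, f_nyq]
        f_obs = f if int(k) % 2 == 0 else f_nyq - f
        print(f_obs, unalias(f_obs, f_nyq, int(k)))   # recovers ~1666.7 uHz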

  15. Controlling the numerical Cerenkov instability in PIC simulations using a customized finite difference Maxwell solver and a local FFT based current correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fei; Yu, Peicheng; Xu, Xinlu

    In this study we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI) which arises when a plasma (neutral or non-neutral) relativistically drifts on a grid when using the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of a FDTD Maxwell solver by using a customized higher order finite difference operator for the spatial derivative along the direction of the drift (1ˆ direction). We show that this eliminates the main NCI modes with moderate |k 1|, while keepsmore » additional main NCI modes well outside the range of physical interest with higher |k 1|. These main NCI modes can be easily filtered out along with first spatial aliasing NCI modes which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1ˆ which typically has many more cells than other directions for the problems of interest. We show that FFTs can be performed locally to current on each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss’ Law is satisfied. Lastly, we present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame that show no evidence of the NCI can be observed when using this customized Maxwell solver together with its NCI elimination scheme.« less

  16. Observational filter for limb sounders applied to convective gravity waves

    NASA Astrophysics Data System (ADS)

    Trinh, Quang Thai; Preusse, Peter; Riese, Martin; Kalisch, Silvio

    Gravity waves (GWs) play a key role in the dynamics of the middle atmosphere. In the current work, the simulated spectral distribution, in terms of horizontal and vertical wavenumber, of GW momentum flux (GWMF) is analysed by applying an accurate observational filter which considers the sensitivity and sampling geometry of satellite instruments. For this purpose, GWs are simulated for January 2008 by coupling GROGRAT (gravity wave regional or global ray tracer) and a ray-based spectral parameterization of convective gravity wave drag (CGWD). The atmospheric background is taken from MERRA (Modern-Era Retrospective Analysis for Research and Applications) data. GW spectra of different spatial and temporal scales from the parameterization of CGWD (MF1, MF2, MF3) at 25 km altitude are considered. The observational filter contains the following elements: determination of the wavelength along the line of sight, application of the visibility filter from Preusse et al., JGR, 2002, determination of the along-track wavelength, and aliasing correction as well as correction of GWMF for longer horizontal wavelengths along-track. The sensitivities and sampling geometries of SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) and HIRDLS (High Resolution Dynamics Limb Sounder) are simulated. Results show that all spectra are shifted toward longer horizontal and vertical wavelengths after applying the observational filter. Spectrum MF1 is most influenced and MF3 least influenced by this filter. The part of the spectrum related to short horizontal wavelengths is cut off and flipped to longer horizontal wavelengths by aliasing. The sampling geometry of HIRDLS allows a larger part of the spectrum to be seen, thanks to its shorter sampling profile distance. The better vertical resolution of the HIRDLS instrument also increases its sensitivity.

  17. Observational filter for limb sounders applied to convective gravity waves

    NASA Astrophysics Data System (ADS)

    Trinh, Thai; Kalisch, Silvio; Preusse, Peter; Riese, Martin

    2014-05-01

    Gravity waves (GWs) play a key role in the dynamics of the middle atmosphere. In the current work, the simulated spectral distribution, in terms of horizontal and vertical wavenumber, of GW momentum flux (GWMF) is analysed by applying an accurate observational filter which considers the sensitivity and sampling geometry of satellite instruments. For this purpose, GWs are simulated for January 2008 by coupling GROGRAT (gravity wave regional or global ray tracer) and a ray-based spectral parameterization of convective gravity wave drag (CGWD). The atmospheric background is taken from MERRA (Modern-Era Retrospective Analysis for Research and Applications) data. GW spectra of different spatial and temporal scales from the parameterization of CGWD (MF1, MF2, MF3) at 25 km altitude are considered. The observational filter contains the following elements: determination of the wavelength along the line of sight, application of the visibility filter from Preusse et al., JGR, 2002, determination of the along-track wavelength, and aliasing correction as well as correction of GWMF for longer horizontal wavelengths along-track. The sensitivities and sampling geometries of SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) and HIRDLS (High Resolution Dynamics Limb Sounder) are simulated. Results show that all spectra are shifted toward longer horizontal and vertical wavelengths after applying the observational filter. Spectrum MF1 is most influenced and MF3 least influenced by this filter. The part of the spectrum related to short horizontal wavelengths is cut off and flipped to longer horizontal wavelengths by aliasing. The sampling geometry of HIRDLS allows a larger part of the spectrum to be seen, thanks to its shorter sampling profile distance. The better vertical resolution of the HIRDLS instrument also increases its sensitivity.

  18. Sensitivity to prediction error in reach adaptation

    PubMed Central

    Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza

    2012-01-01

    It has been proposed that the brain predicts the sensory consequences of a movement and compares it to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
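
    Operationally, the sensitivity is just a ratio of trial-to-trial quantities; the toy learner below uses an invented sublinear correction rule purely to produce the declining pattern described:

        import numpy as np

        def error_sensitivity(errors, commands):
            # Trial-to-trial change in the motor command divided by the error
            # experienced on that trial; len(commands) == len(errors) + 1.
            return np.diff(commands) / errors

        e = np.array([2.0, 4.0, 8.0, 16.0])                        # imposed errors (a.u.)
        u = np.concatenate([[0.0], np.cumsum(0.8 * np.sqrt(e))])   # toy motor commands
        print(error_sensitivity(e, u))                             # declines as errors grow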

  19. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting the rainfall input requirements of urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have mostly been developed for, and tested at, large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge-only records through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2], and these have been shown to fail to capture small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case, the radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as a case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model, of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.

  20. Addressing Common Student Errors with Classroom Voting in Multivariable Calculus

    ERIC Educational Resources Information Center

    Cline, Kelly; Parker, Mark; Zullo, Holly; Stewart, Ann

    2012-01-01

    One technique for identifying and addressing common student errors is the method of classroom voting, in which the instructor presents a multiple-choice question to the class, and after a few minutes for consideration and small group discussion, each student votes on the correct answer, often using a hand-held electronic clicker. If a large number…
