Sample records for temporal aliasing errors

  1. Probing the Spatio-Temporal Characteristics of Temporal Aliasing Errors and their Impact on Satellite Gravity Retrievals

    NASA Astrophysics Data System (ADS)

    Wiese, D. N.; McCullough, C. M.

    2017-12-01

    Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.

  2. Treatment of temporal aliasing effects in the context of next generation satellite gravimetry missions

    NASA Astrophysics Data System (ADS)

    Daras, Ilias; Pail, Roland

    2017-09-01

    Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.

  3. Treatment of ocean tide aliasing in the context of a next generation gravity field mission

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Pail, Roland

    2018-07-01

    Current temporal gravity field solutions from Gravity Recovery and Climate Experiment (GRACE) suffer from temporal aliasing errors due to undersampling of signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean) and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high-resolution temporal gravity fields from future gravity missions such as GRACE Follow-On and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parametrize ocean tide parameters of the eight main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from 1 to 3 yr leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for a NGGM Bender-type formation.

  4. Treatment of ocean tide aliasing in the context of a next generation gravity field mission

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Pail, Roland

    2018-04-01

    Current temporal gravity field solutions from GRACE suffer from temporal aliasing errors due to under-sampling of signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean), and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high resolution temporal gravity fields from future gravity missions such as GRACE Follow-on and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parameterize ocean tide parameters of the 8 main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from one to three years leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 per cent and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for a NGGM Bender-type formation.

  5. Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael

    2017-04-01

    The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities of a global space-borne infrastructure for data transfer, clock synchronization and ranging. Gravity field recovery could be one of the first applications to benefit from such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite tracking as a novel method and long-term perspective for determining the Earth's gravitational field, using it in synergy with one-way high-low combined with low-low satellite-to-satellite tracking in order to generate adequate de-aliasing products. First planned as a constellation of geostationary satellites, it turned out that integrating European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would remarkably extend the capability of such a mission. We report on simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time-variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite-tracking mission, results show reduced temporal aliasing errors due to more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction of the inter-satellite links, as well as the potential for improved gravity recovery with higher spatial and temporal resolution. The major error contributors in temporal gravity retrieval are aliasing errors due to undersampling of high-frequency signals (mainly atmosphere, ocean and ocean tides); in this context, we investigate adequate methods to reduce these errors. We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period, long-wavelength gravity field signals, indicating this as a powerful method for non-tidal aliasing reduction.

  6. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling by the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at not strictly monthly intervals and has occasional gaps, which result in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.

  7. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
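    The azimuthal undersampling described above can be illustrated with a toy model (not the paper's simulation code): sampled at N equally spaced detectors, a cos(mθ) perturbation is indistinguishable from cos((m ± N)θ), so a four-detector BPM confuses an m = 5 asymmetry with a simple beam offset (m = 1), while eight detectors separate the two. A minimal sketch:

```python
import math

def detector_signals(m, n_detectors, amplitude=1.0):
    """Sample an azimuthal perturbation cos(m*theta) at n_detectors
    equally spaced angles around the beam tube wall."""
    return [amplitude * math.cos(m * 2 * math.pi * k / n_detectors)
            for k in range(n_detectors)]

# With the usual 4 detectors, an m = 5 perturbation produces exactly
# the same four readings as an m = 1 (beam position) perturbation:
four_m1 = detector_signals(1, 4)
four_m5 = detector_signals(5, 4)
assert all(abs(a - b) < 1e-12 for a, b in zip(four_m1, four_m5))

# With 8 detectors the two modes give clearly different readings:
eight_m1 = detector_signals(1, 8)
eight_m5 = detector_signals(5, 8)
assert max(abs(a - b) for a, b in zip(eight_m1, eight_m5)) > 1.0
```

This is the sense in which adding detectors beyond the usual four reduces aliasing errors.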

  8. Constellations of Next Generation Gravity Missions: Simulations regarding optimal orbits and mitigation of aliasing errors

    NASA Astrophysics Data System (ADS)

    Hauk, M.; Pail, R.; Gruber, T.; Purkhauser, A.

    2017-12-01

    The CHAMP and GRACE missions have demonstrated the tremendous potential for observing mass changes in the Earth system from space. In order to fulfil future user needs, monitoring of mass distribution and mass transport with higher spatial and temporal resolution is required. This can be achieved by a Bender-type Next Generation Gravity Mission (NGGM) consisting of a constellation of satellite pairs flying in (near-)polar and inclined orbits, respectively. For these satellite pairs the observation concept of the GRACE Follow-on mission with a laser-based low-low satellite-to-satellite tracking (ll-SST) system and more precise accelerometers and state-of-the-art star trackers is adopted. By choosing optimal orbit constellations for these satellite pairs, high-frequency mass variations will be observable and temporal aliasing errors from under-sampling will no longer be the limiting factor. As part of the European Space Agency (ESA) study "ADDCON" (ADDitional CONstellation and Scientific Analysis Studies of the Next Generation Gravity Mission) a variety of mission design parameters for such constellations are investigated by full numerical simulations. These simulations aim at investigating the impact of several orbit design choices and at the mitigation of aliasing errors in the gravity field retrieval by co-parametrization for various constellations of Bender-type NGGMs. Choices for orbit design parameters such as altitude profiles during mission lifetime, length of retrieval period, value of sub-cycles and choice of prograde versus retrograde orbits are investigated as well. Results of these simulations are presented and optimal constellations for NGGMs are identified. Finally, a short outlook towards new geophysical applications like a near-real-time service for hydrology is given.

  9. Aliased tidal errors in TOPEX/POSEIDON sea surface height data

    NASA Technical Reports Server (NTRS)

    Schlax, Michael G.; Chelton, Dudley B.

    1994-01-01

    Alias periods and wavelengths for the M2, S2, N2, K1, O1, and P1 tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K1 constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M2 alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M2, N2 and O1 fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S2, K1 and P1 varies smoothly with latitude. S2 is strongly aliased for latitudes within 50 degrees of the equator, while K1 and P1 are only weakly aliased in that range. A weighted least squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual-period, westward-propagating waves in the North Atlantic is presented.
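    The alias period of a tidal constituent follows from folding its frequency into the Nyquist band of the exact-repeat sampling. A minimal sketch (not the paper's method for alias wavelengths), assuming the ~9.9156-day TOPEX/POSEIDON repeat period and standard constituent periods, reproduces the behavior described above: the M2 alias falls near 62 days, while K1 is the exception with an alias period well beyond 90 days.

```python
def alias_period(constituent_period, sample_interval):
    """Period (same units as the inputs) at which a periodic signal
    appears when sampled once per `sample_interval`, obtained by
    folding the signal frequency into [0, Nyquist]."""
    cycles_per_sample = sample_interval / constituent_period
    frac = cycles_per_sample % 1.0        # remove whole cycles per sample
    frac = min(frac, 1.0 - frac)          # fold about the Nyquist frequency
    if frac == 0.0:
        return float('inf')               # exactly synchronous: no alias
    return sample_interval / frac

TP = 9.9156                 # TOPEX/POSEIDON repeat sampling, days
M2 = 12.4206012 / 24.0      # M2 constituent period, days
K1 = 23.9344696 / 24.0      # K1 constituent period, days

print(round(alias_period(M2, TP), 1))   # ~62.1 days, shorter than 90 d
print(round(alias_period(K1, TP), 1))   # ~173 days, the K1 exception
```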

  10. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at small λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  11. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

    Spatio-temporal aliasing affects pilot performance and control behavior. With increasing refresh rates, control behavior changes significantly: visual gain and neuromuscular frequency increase, and visual time delay decreases. Tracking performance also improves: RMS tracking error decreases and crossover frequency increases.

  12. The Influence of Gantry Geometry on Aliasing and Other Geometry Dependent Errors

    NASA Astrophysics Data System (ADS)

    Joseph, Peter M.

    1980-06-01

    At least three gantry geometries are widely used in medical CT scanners: (1) rotate-translate, (2) rotating detectors, (3) stationary detectors. There are significant geometrical differences between these designs, especially regarding (a) the region of space scanned by any given detector and (b) the sample density of rays which scan the patient. It is imperative to distinguish between "views" and "rays" in analyzing this situation. In particular, views are defined by the x-ray source in type 2 and by the detector in type 3 gantries. It is known that ray dependent errors are generally much more important than view dependent errors. It is shown that spatial resolution is primarily limited by the spacing between rays in any view, while the number of ray samples per beam width determines the extent of aliasing artifacts. Rotating detector gantries are especially susceptible to aliasing effects. It is shown that aliasing effects can distort the point spread function in a way that is highly dependent on the position of the point in the scanned field. Such effects can cause anomalies in the MTF functions as derived from points in machines with significant aliasing problems.

  13. Simulation Study of a Follow-on Gravity Mission to GRACE

    NASA Technical Reports Server (NTRS)

    Loomis, Bryant D.; Nerem, R. S.; Luthcke, Scott B.

    2012-01-01

    The gravity recovery and climate experiment (GRACE) has been providing monthly estimates of the Earth's time-variable gravity field since its launch in March 2002. The GRACE gravity estimates are used to study temporal mass variations on global and regional scales, which are largely caused by a redistribution of water mass in the Earth system. The accuracy of the GRACE gravity fields is primarily limited by the satellite-to-satellite range-rate measurement noise, accelerometer errors, attitude errors, orbit errors, and temporal aliasing caused by unmodeled high-frequency variations in the gravity signal. Recent work by Ball Aerospace and Technologies Corp., Boulder, CO, has resulted in the successful development of an interferometric laser ranging system to specifically address the limitations of the K-band microwave ranging system that provides the satellite-to-satellite measurements for the GRACE mission. Full numerical simulations are performed for several possible configurations of a GRACE Follow-On (GFO) mission to determine if a future satellite gravity recovery mission equipped with a laser ranging system will provide better estimates of time-variable gravity, thus benefiting many areas of Earth systems research. The laser ranging system improves the range-rate measurement precision to approximately 0.6 nm/s, as compared to approximately 0.2 μm/s for the GRACE K-band microwave ranging instrument. Four different mission scenarios are simulated to investigate the effect of the better instrument at two different altitudes. The first pair of simulated missions is flown at GRACE altitude (approx. 480 km) assuming on-board accelerometers with the same noise characteristics as those currently used for GRACE. The second pair of missions is flown at an altitude of approx. 250 km which requires a drag-free system to prevent satellite re-entry. 
In addition to allowing a lower satellite altitude, the drag-free system also reduces the errors associated with the accelerometer. All simulated mission scenarios assume a two satellite co-orbiting pair similar to GRACE in a near-polar, near-circular orbit. A method for local time variable gravity recovery through mass concentration blocks (mascons) is used to form simulated gravity estimates for Greenland and the Amazon region for three GFO configurations and GRACE. Simulation results show that the increased precision of the laser does not improve gravity estimation when flown with on-board accelerometers at the same altitude and spacecraft separation as GRACE, even when time-varying background models are not included. This study also shows that only modest improvement is realized for the best-case scenario (laser, low-altitude, drag-free) as compared to GRACE due to temporal aliasing errors. These errors are caused by high-frequency variations in the hydrology signal and imperfections in the atmospheric, oceanographic, and tidal models which are used to remove unwanted signal. This work concludes that applying the updated technologies alone will not immediately advance the accuracy of the gravity estimates. If the scientific objectives of a GFO mission require more accurate gravity estimates, then future work should focus on improvements in the geophysical models, and ways in which the mission design or data processing could reduce the effects of temporal aliasing.

  14. An interactive Doppler velocity dealiasing scheme

    NASA Astrophysics Data System (ADS)

    Pan, Jiawen; Chen, Qi; Wei, Ming; Gao, Li

    2009-10-01

    Doppler weather radars are capable of providing high quality wind data at a high spatial and temporal resolution. However, operational application of Doppler velocity data from weather radars is hampered by the infamous limitation of the velocity ambiguity. This paper reviews the cause of velocity folding and presents the unfolding method recently implemented for the CINRAD systems. A simple interactive method for velocity data, which corrects de-aliasing errors, has been developed and tested. It is concluded that the algorithm is very efficient and produces high quality velocity data.
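    The velocity-unfolding step at the core of such schemes can be sketched as follows; `unfold` is a hypothetical helper, not the CINRAD implementation, and the 26.8 m/s Nyquist velocity is an illustrative value. An aliased measurement is corrected by adding the multiple of 2·v_Nyquist that brings it closest to a trusted reference, e.g. an adjacent already-corrected gate or a model wind.

```python
def unfold(v_measured, v_reference, v_nyquist):
    """Shift an aliased radial velocity by the multiple of the Nyquist
    co-interval (2 * v_nyquist) that places it closest to a trusted
    reference value."""
    n = round((v_reference - v_measured) / (2.0 * v_nyquist))
    return v_measured + n * 2.0 * v_nyquist

# A true radial velocity of 31 m/s observed with v_nyquist = 26.8 m/s
# folds into the measurable interval as 31 - 2*26.8 = -22.6 m/s;
# unfolding against a nearby reference of 29 m/s recovers it:
v_ny = 26.8
folded = 31.0 - 2.0 * v_ny
print(round(unfold(folded, 29.0, v_ny), 6))   # 31.0
```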

  15. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
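    The aliasing mechanism and its cure by over-integration can be seen in one dimension: the nonlinear product of two degree-2 polynomials has degree 4, which an n-point Gauss rule integrates exactly only when 2n − 1 ≥ 4. A minimal sketch (not the paper's FR/DG implementation):

```python
import numpy as np

def gauss_integrate(f, n_points):
    """Integrate f over [-1, 1] with an n-point Gauss-Legendre rule,
    exact for polynomials up to degree 2*n_points - 1."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    return float(np.dot(weights, f(nodes)))

# The nonlinear "flux" u*u of a degree-2 solution u = x**2 has degree 4.
flux = lambda x: x**2 * x**2
exact = 2.0 / 5.0                   # integral of x^4 over [-1, 1]

under = gauss_integrate(flux, 2)    # matches the solution degree: aliased
over = gauss_integrate(flux, 3)     # over-integrated: exact

print(abs(under - exact) > 0.1)     # True  (aliasing error present)
print(abs(over - exact) < 1e-12)    # True  (removed by over-integration)
```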

  16. A simulation for gravity fine structure recovery from low-low GRAVSAT SST data

    NASA Technical Reports Server (NTRS)

    Estes, R. H.; Lancaster, E. R.

    1976-01-01

    Covariance error analysis techniques were applied to investigate estimation strategies for the low-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. A 5 degree by 5 degree surface density block representation of the high-order geopotential was utilized with the drag-free low-low GRAVSAT configuration in a circular polar orbit at 250 km altitude. Recovery of local sets of density blocks from long data arcs was found not to be feasible due to strong aliasing effects. The error analysis for the recovery of local sets of density blocks using independent short data arcs demonstrated that the estimation strategy of simultaneously estimating a local set of blocks covered by data and two "buffer layers" of blocks not covered by data greatly reduced aliasing errors.

  17. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.

  18. GRAVSAT/GEOPAUSE covariance analysis including geopotential aliasing

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1975-01-01

    A conventional covariance analysis for the GRAVSAT/GEOPAUSE mission is described in which the uncertainties of approximately 200 parameters, including the geopotential coefficients to degree and order 12, are estimated over three different tracking intervals. The estimated orbital uncertainties for both GRAVSAT and GEOPAUSE reach levels more accurate than presently available. The adjusted measurement bias errors approach the mission goal. Survey errors in the low centimeter range are achieved after ten days of tracking. The ability of the mission to obtain accuracies of geopotential terms to (12, 12) one to two orders of magnitude superior to present accuracy levels is clearly shown. A unique feature of this report is that the aliasing structure of this (12, 12) field is examined. It is shown that uncertainties for unadjusted terms to (12, 12) still exert a degrading effect upon the adjusted error of an arbitrarily selected term of lower degree and order. Finally, the distribution of the aliasing from the unestimated uncertainty of a particular high degree and order geopotential term upon the errors of all remaining adjusted terms is listed in detail.

  19. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal sensing resolution; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide simultaneous measurements with high spatial resolution. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. 
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than that required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding full-field mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  20. A Simple Approach to Fourier Aliasing

    ERIC Educational Resources Information Center

    Foadi, James

    2007-01-01

    In the context of discrete Fourier transforms, the idea that aliasing arises from approximation errors in the integral defining the Fourier coefficients is introduced and explained. This has the positive pedagogical effect of getting to the heart of sampling and the discrete Fourier transform without having to delve into effective, but otherwise long and…
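The aliasing identity at the heart of this pedagogical point can be verified directly: two tones separated by the sampling rate produce identical samples, so their DFT coefficients coincide. A minimal numpy illustration (frequencies chosen arbitrarily):

```python
import numpy as np

N = 8                          # 8 samples over a 1-second window (fs = 8 Hz)
n = np.arange(N)

# A 3 Hz tone and an 11 Hz tone (3 + 8) agree at every sample instant...
x3 = np.cos(2 * np.pi * 3 * n / N)
x11 = np.cos(2 * np.pi * 11 * n / N)

# ...so all energy of the 11 Hz tone lands in the 3 Hz DFT bin.
X3 = np.fft.fft(x3)
X11 = np.fft.fft(x11)
```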

  1. Site Distribution and Aliasing Effects in the Inversion for Load Coefficients and Geocenter Motion from GPS Data

    NASA Technical Reports Server (NTRS)

    Wu, Xiaoping; Argus, Donald F.; Heflin, Michael B.; Ivins, Erik R.; Webb, Frank H.

    2002-01-01

    Precise GPS measurements of elastic relative site displacements due to surface mass loading offer important constraints on global surface mass transport. We investigate the effects of site distribution and of aliasing by higher-degree (n ≥ 2) loading terms on the inversion of GPS data for n = 1 load coefficients and geocenter motion. Covariance and simulation analyses are conducted to assess the sensitivity of the inversion to aliasing and mismodeling errors and the resulting uncertainties in the n = 1 load coefficient determination. We find that use of the center-of-figure approximation in the inverse formulation can cause 10-15% errors in the inverted load coefficients. The n = 1 load estimates may be significantly contaminated by unknown higher-degree terms, depending on the load scenario and the GPS site distribution. The uncertainty in the n = 1 zonal load estimate is at the level of 80-95% for two load scenarios.
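The aliasing mechanism described here, higher-degree signal leaking into a low-degree estimate through an uneven station network, can be imitated with a one-dimensional toy inversion. This is a hedged sketch with zonal Legendre polynomials and invented site latitudes, not the paper's covariance analysis:

```python
import numpy as np
from numpy.polynomial import legendre as L

def fit_degree1(sin_lat, c1=1.0, c3=0.5):
    """Least-squares estimate of a degree-1 zonal coefficient when an
    unmodelled degree-3 term is also present in the observations."""
    d = c1 * L.legval(sin_lat, [0, 1]) + c3 * L.legval(sin_lat, [0, 0, 0, 1])
    A = L.legval(sin_lat, [0, 1])[:, None]   # degree-1-only design matrix
    return np.linalg.lstsq(A, d, rcond=None)[0][0]

# Near-uniform global coverage: P1 and P3 are nearly orthogonal on the grid.
uniform = fit_degree1(np.linspace(-0.99, 0.99, 100))

# A northern-hemisphere-heavy network (hypothetical latitudes): the
# degree-3 signal aliases strongly into the degree-1 estimate.
biased = fit_degree1(np.sin(np.radians([5, 25, 40, 50, 60, 70, 80])))
```

With the true coefficient equal to 1, the uneven network produces a markedly larger bias than the near-uniform one, mirroring the dependence on site distribution noted in the abstract.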

  2. The resolution capability of an irregularly sampled dataset: With application to Geosat altimeter data

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1994-01-01

    A formalism is presented for determining the wavenumber-frequency transfer function associated with an irregularly sampled multidimensional dataset. This transfer function reveals the filtering characteristics and aliasing patterns inherent in the sample design. In combination with information about the spectral characteristics of the signal, the transfer function can be used to quantify the spatial and temporal resolution capability of the dataset. Application of the method to idealized Geosat altimeter data (i.e., neglecting measurement errors and data dropouts) indicates that the Geosat orbit configuration is capable of resolving scales of about 3 deg in latitude and longitude by about 30 days.

  3. Separation of parallel encoded complex-valued slices (SPECS) from a single complex-valued aliased coil image.

    PubMed

    Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C

    2016-04-01

    Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequent inter-slice signal leakage from the slice un-aliasing, while maintaining an optimal reduction in scan time and activation statistics in fMRI studies. When the combined k-space array is inverse-Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least-squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with aliased images acquired in a shifted-FOV aliasing pattern and a bootstrapping approach that incorporates reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to spatial and temporal resolution. Functional activation is observed in the motor cortex as the number of aliased slices is increased in a bilateral finger-tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images with virtually no residual artifacts, while retaining functional activation detection in the separated images.
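The core un-aliasing step, least-squares separation of slices combined under known encoding weights, can be sketched in toy form. The two-slice Hadamard example below is a deliberate simplification of SPECS (random images, noiseless encoding), not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)
slices = rng.standard_normal((2, 64, 64))     # two "true" image slices

# Hadamard reference encoding over two acquisitions: sum and difference.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]])
aliased = np.tensordot(H, slices, axes=1)     # two collapsed (aliased) images

# Least-squares un-aliasing, pixel by pixel, from the known encoding matrix.
separated = np.tensordot(np.linalg.pinv(H), aliased, axes=1)
```

Because the Hadamard rows are orthogonal, the least-squares estimator is well conditioned and the slices are recovered without inter-slice leakage in this idealized, noiseless setting.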

  4. On the formulation of gravitational potential difference between the GRACE satellites based on energy integral in Earth fixed frame

    NASA Astrophysics Data System (ADS)

    Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.

    2015-09-01

    Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral: one in the geocentric inertial frame (GIF) and another in the Earth-fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on the necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored without verification in all former work should be retained. In our simulations, 2 cycle-per-revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower-frequency errors can improve the precision of the Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude, despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influence on the GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of C̄_{2,0} manifest as 2 CPR errors in the GPD, and (2) errors of C̄_{2,0} in GRACE data (the differences between the CSR monthly values of C̄_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large.
Our simulations show that, if 2 CPR errors in the GPD vary from day to day as much as those corresponding to errors of C̄_{2,0} vary from month to month, the aliasing errors of SCs of degree 15 and above computed using a month of GPD data may reach a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled, and we therefore propose an approach to reduce aliasing errors from 2 CPR and lower-frequency errors when computing SCs above degree 2.
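The kind of empirical removal described above can be sketched as a least-squares fit of a bias plus orbital-frequency harmonics to the GPD time series. The orbital period, sampling, amplitudes, and the `remove_cpr` helper below are all hypothetical illustration, not the paper's procedure:

```python
import numpy as np

def remove_cpr(t, gpd, f_orb, harmonics=(1, 2)):
    """Least-squares fit and removal of a bias plus 1- and 2-CPR sinusoids
    from a gravitational potential difference (GPD) time series."""
    cols = [np.ones_like(t)]
    for k in harmonics:
        cols.append(np.cos(2 * np.pi * k * f_orb * t))
        cols.append(np.sin(2 * np.pi * k * f_orb * t))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, gpd, rcond=None)
    return gpd - A @ coeffs

f_orb = 1.0 / 5580.0                    # ~93-minute orbit (assumed)
t = np.arange(0.0, 86400.0, 5.0)        # one day of 5 s samples

slow_signal = 1e-3 * np.sin(2 * np.pi * t / 43200.0)   # slow "geophysical" term
cpr_error = 0.5 * np.cos(2 * np.pi * 2 * f_orb * t + 0.3)  # large 2 CPR error

cleaned = remove_cpr(t, slow_signal + cpr_error, f_orb)
```

Over a day-long arc the 2 CPR columns are nearly orthogonal to the slow signal, so the fit removes the large error while leaving the low-frequency term essentially untouched.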

  5. Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO).

    PubMed

    Velikina, Julia V; Samsonov, Alexey A

    2015-11-01

    To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data.
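The distinction drawn here, soft regularization toward a pre-estimated temporal subspace versus a hard low-rank constraint, can be illustrated with a small linear toy problem. This is a MOCCO-flavored sketch under invented dimensions and a generic random sampling operator, not the published reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
T, m, r = 64, 24, 4                     # time frames, measurements, model rank

# Pre-estimated temporal model: an orthonormal basis U from "training" data.
U, _ = np.linalg.qr(rng.standard_normal((T, r)))

# True dynamic curve: mostly inside the model subspace, plus a small
# deviation that a hard low-rank constraint would have to discard.
x_true = U @ np.array([3.0, 2.0, -2.0, 1.0]) + 0.05 * rng.standard_normal(T)

A = rng.standard_normal((m, T)) / np.sqrt(m)    # undersampling operator
y = A @ x_true                                   # undersampled data

# Soft constraint: penalise only the component of x outside span(U),
# so the solution is not forced to be low-rank.
P = np.eye(T) - U @ U.T                          # projector off the model
lam = 10.0
x_soft = np.linalg.solve(A.T @ A + lam * P, A.T @ y)

# Hard subspace-constrained alternative for comparison.
c = np.linalg.lstsq(A @ U, y, rcond=None)[0]
x_hard = U @ c
```

The hard-constrained solution can never recover the off-model deviation (its error is bounded below by the deviation's norm), while the soft penalty lets the data pull the solution off the subspace, which is the full-rank behavior the abstract describes.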

  6. RECONSTRUCTION OF DYNAMIC IMAGE SERIES FROM UNDERSAMPLED MRI DATA USING DATA-DRIVEN MODEL CONSISTENCY CONDITION (MOCCO)

    PubMed Central

    Velikina, Julia V.; Samsonov, Alexey A.

    2014-01-01

    Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724

  7. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers havemore » high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially that requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. 
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shapes estimation directly. Then the signal aliasing properties in modal analysis is exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.« less

  8. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE PAGES

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; ...

    2016-12-05

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers havemore » high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially that requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. 
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shapes estimation directly. Then the signal aliasing properties in modal analysis is exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.« less

  9. Angular oversampling with temporally offset layers on multilayer detectors in computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjölin, Martin, E-mail: martin.sjolin@mi.physics.kth.se; Danielsson, Mats

    2016-06-15

    Purpose: Today’s computed tomography (CT) scanners operate at increasingly high rotation speeds in order to reduce motion artifacts and to fulfill the requirements of dynamic acquisition, e.g., perfusion and cardiac imaging, with a lower angular sampling rate as a consequence. In this paper, a simple method for obtaining angular oversampling when using multilayer detectors in continuous-rotation CT is presented. Methods: By introducing temporal offsets between the measurement periods of the different layers on a multilayer detector, the angular sampling rate can be increased by a factor equal to the number of layers on the detector. The increased angular sampling rate reduces the risk of producing aliasing artifacts in the image. A simulation of a detector with two layers is performed to prove the concept. Results: The simulation study shows that aliasing artifacts from insufficient angular sampling are reduced by the proposed method. Specifically, when imaging a single point blurred by a 2D Gaussian kernel, the method is shown to reduce the strength of the aliasing artifacts by approximately an order of magnitude. Conclusions: The presented oversampling method is easy to implement in today’s multilayer detectors and has the potential to reduce aliasing artifacts in the reconstructed images.
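The sampling argument can be sketched for a two-layer detector: offsetting the second layer by half a measurement period interleaves the view angles and doubles the angular sampling rate, so an angular harmonic above the single-layer Nyquist limit is resolved instead of aliased. The view count and test harmonic below are arbitrary:

```python
import numpy as np

n_views = 64                       # views per rotation for a single layer
dt = 1.0 / n_views                 # one measurement period (rotation = 1)

# Layer 2 measures with a half-period temporal offset relative to layer 1,
# so interleaving the two layers doubles the angular sampling rate.
theta1 = 2 * np.pi * np.arange(n_views) * dt
theta2 = theta1 + np.pi * dt
theta = np.sort(np.concatenate([theta1, theta2]))

# An angular harmonic above the single-layer Nyquist limit of 32 cycles/rev:
def sinogram(th):
    return np.cos(40 * th)

peak_single = int(np.argmax(np.abs(np.fft.rfft(sinogram(theta1)))))
peak_dual = int(np.argmax(np.abs(np.fft.rfft(sinogram(theta)))))
```

With one layer the 40 cycles/rev harmonic folds to bin 64 − 40 = 24 (an aliasing artifact); with the interleaved layers it appears at the correct bin 40.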

  10. Recovery of an evolving magnetic flux rope in the solar wind: Decomposing spatial and temporal variations from single-spacecraft data

    NASA Astrophysics Data System (ADS)

    Hasegawa, H.; Sonnerup, B.; Hu, Q.; Nakamura, T.

    2013-12-01

    We present a novel single-spacecraft data analysis method for decomposing spatial and temporal variations of physical quantities at points along the path of a spacecraft in spacetime. The method is designed for use in the reconstruction of slowly evolving two-dimensional, magneto-hydrostatic structures (Grad-Shafranov equilibria) in a space plasma. It is an extension of the one developed by Sonnerup and Hasegawa [2010] and Hasegawa et al. [2010], in which it was assumed that variations in the time series of data, recorded as the structures move past the spacecraft, are all due to spatial effects. In reality, some of the observed variations are usually caused by temporal evolution of the structure during the time it moves past the observing spacecraft; the information in the data about the spatial structure is aliased by temporal effects. The purpose here is to remove this time aliasing from the reconstructed maps of field and plasma properties. Benchmark tests are performed by use of synthetic data taken by a virtual spacecraft as it traverses, at a constant velocity, a slowly growing magnetic flux rope in a two-dimensional magnetohydrodynamic simulation of magnetic reconnection. These tests show that the new method can better recover the spacetime behavior of the flux rope than does the original version, in which time aliasing effects had not been removed. An application of the new method to a solar wind flux rope, observed by the ACE spacecraft, suggests that it was evolving in a significant way during the ~17 hour interval of the traversal. References Hasegawa, H., B. U. Ö. Sonnerup, and T. K. M. Nakamura (2010), Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event, J. Geophys. Res., 115, A11219, doi:10.1029/2010JA015679. Sonnerup, B. U. Ö., and H. Hasegawa (2010), On slowly evolving Grad-Shafranov equilibria, J. Geophys. Res., 115, A11218, doi:10.1029/2010JA015678. 
Figure: Magnetic field maps recovered from (a) the aliased (original) and (b) the de-aliased (new) versions of the time evolution method. Colors show the out-of-plane (z) magnetic field component, and white arrows at points along y = 0 show the transverse velocities obtained from the reconstruction. The blue diamonds in panel (b) mark the location of the ACE spacecraft.

  11. Exploiting the Modified Colombo-Nyquist Rule for Co-estimating Sub-monthly Gravity Field Solutions from a GRACE-like Mission

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Weigelt, M.; Mueller, J.

    2017-12-01

    In order to suppress the impact of aliasing errors on the standard monthly GRACE gravity-field solutions, co-estimating sub-monthly (daily or two-day) low-degree solutions has been suggested. The maximum degree of these low-degree solutions is conventionally chosen via the Colombo-Nyquist rule of thumb. However, it is now established that the sampling of satellites restricts the maximum estimable order rather than the degree (the modified Colombo-Nyquist rule). In this contribution we therefore co-estimate low-order sub-monthly solutions, and compare and contrast them with the low-degree sub-monthly solutions. We also investigate their efficacy in dealing with aliasing errors.

  12. Blind Compressed Sensing Enables 3-Dimensional Dynamic Free Breathing Magnetic Resonance Imaging of Lung Volumes and Diaphragm Motion.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews

    2016-06-01

    The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme to recover dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually improved reconstructions compared with the other schemes.
The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal-blurring and spatial-blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. Comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring than the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free-breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole-lung coverage (16 slices), were achieved using the BCS scheme.

  13. Receptoral and Neural Aliasing.

    DTIC Science & Technology

    1993-01-30

    standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical response to spatio-temporal and novel stimuli for investigation of visual field deficits

  14. Sampling frequency for water quality variables in streams: Systems analysis to quantify minimum monitoring rates.

    PubMed

    Chappell, Nick A; Jones, Timothy D; Tych, Wlodek

    2017-10-15

    Insufficient temporal monitoring of water quality in streams or engineered drains alters the apparent shape of storm chemographs, resulting in shifted model parameterisations and changed interpretations of the solute sources that have produced episodes of poor water quality. This so-called 'aliasing' phenomenon is poorly recognised in water research. Using advances in in-situ sensor technology it is now possible to monitor sufficiently frequently to avoid the onset of aliasing. A systems modelling procedure is presented allowing objective identification of the sampling rates needed to avoid aliasing within strongly rainfall-driven chemical dynamics. In this study, aliasing of storm chemograph shapes was quantified by changes in the time constant parameter (TC) of transfer functions. As a proportion of the original TC, the onset of aliasing varied between watersheds, ranging from 3.9-7.7% to 54-79% of TC (or 110-160 to 300-600 min). However, a minimum monitoring rate could be identified for all datasets if the modelling results were presented in the form of a new statistic, ΔTC. For the eight H+, DOC and NO3-N datasets examined from a range of watershed settings, an empirically derived threshold of 1.3(ΔTC) could be used to quantify minimum monitoring rates within sampling protocols to avoid artefacts in subsequent data analysis.

  15. On the aliasing of the solar cycle in the lower stratospheric tropical temperature

    NASA Astrophysics Data System (ADS)

    Kuchar, Ales; Ball, William T.; Rozanov, Eugene V.; Stenke, Andrea; Revell, Laura; Miksovsky, Jiri; Pisoft, Petr; Peter, Thomas

    2017-09-01

    The double-peaked response of the tropical stratospheric temperature profile to the 11 year solar cycle (SC) has been well documented. However, there are concerns about the origin of the lower peak due to potential aliasing with volcanic eruptions or the El Niño-Southern Oscillation (ENSO) detected using multiple linear regression analysis. We confirm the aliasing using the results of the chemistry-climate model (CCM) SOCOLv3 obtained in the framework of the International Global Atmospheric Chemistry/Stratosphere-troposphere Processes And their Role in Climate Chemistry-Climate Model Initiative phase 1. We further show that even without major volcanic eruptions included in transient simulations, the lower stratospheric response exhibits a residual peak when historical sea surface temperatures (SSTs)/sea ice coverage (SIC) are used. Only the use of climatological SSTs/SICs in addition to background stratospheric aerosols removes volcanic and ENSO signals and results in an almost complete disappearance of the modeled solar signal in the lower stratospheric temperature. We demonstrate that the choice of temporal subperiod considered for the regression analysis has a large impact on the estimated profile signal in the lower stratosphere: at least 45 consecutive years are needed to avoid the large aliasing effect of SC maxima with volcanic eruptions in 1982 and 1991 in historical simulations, reanalyses, and observations. The application of volcanic forcing compiled for phase 6 of the Coupled Model Intercomparison Project (CMIP6) in the CCM SOCOLv3 reduces the warming overestimation in the tropical lower stratosphere and the volcanic aliasing of the temperature response to the SC, although it does not eliminate it completely.
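The record-length argument can be caricatured with a deterministic toy regression: a temperature anomaly containing only a volcanic pulse near a solar maximum is regressed onto an idealized solar-cycle proxy. Over a short sub-period the pulse aliases into a spuriously large solar coefficient; over a 45-year record the coefficient shrinks. All amplitudes and dates below are invented:

```python
import numpy as np

t = np.arange(0.0, 45.0, 1.0 / 12.0)           # 45 years of monthly values
solar = np.cos(2 * np.pi * t / 11.0)           # idealised 11-year cycle (max at t = 22)
volcanic = 0.8 * np.exp(-0.5 * (t - 22.0) ** 2)  # one eruption near a solar maximum

temp = volcanic            # anomaly contains *no* genuine solar response

def solar_coeff(mask):
    """Regression slope of the temperature anomaly onto the solar proxy."""
    return np.polyfit(solar[mask], temp[mask], 1)[0]

beta_short = solar_coeff((t >= 12) & (t < 32))        # 20-year sub-period
beta_full = solar_coeff(np.ones_like(t, dtype=bool))  # full 45-year record
```

Even though no solar signal was put in, both regressions return a positive coefficient; the short window inflates it by roughly a factor of three, which is the aliasing of eruption timing with SC maxima that the abstract describes.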

  16. Evaluation of slice accelerations using multiband echo planar imaging at 3 Tesla

    PubMed Central

    Xu, Junqian; Moeller, Steen; Auerbach, Edward J.; Strupp, John; Smith, Stephen M.; Feinberg, David A.; Yacoub, Essa; Uğurbil, Kâmil

    2013-01-01

    We evaluate residual aliasing among simultaneously excited and acquired slices in slice accelerated multiband (MB) echo planar imaging (EPI). No in-plane accelerations were used in order to maximize and evaluate achievable slice acceleration factors at 3 Tesla. We propose a novel leakage (L-) factor to quantify the effects of signal leakage between simultaneously acquired slices. With a standard 32-channel receiver coil at 3 Tesla, we demonstrate that slice acceleration factors of up to eight (MB = 8) with blipped controlled aliasing in parallel imaging (CAIPI), in the absence of in-plane accelerations, can be used routinely with acceptable image quality and integrity for whole brain imaging. Spectral analyses of single-shot fMRI time series demonstrate that temporal fluctuations due to both neuronal and physiological sources were distinguishable and comparable up to slice-acceleration factors of nine (MB = 9). The increased temporal efficiency could be employed to achieve, within a given acquisition period, higher spatial resolution, increased fMRI statistical power, multiple TEs, faster sampling of temporal events in a resting state fMRI time series, increased sampling of q-space in diffusion imaging, or more quiet time during a scan. PMID:23899722

  17. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.

  18. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
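The circular-statistics idea can be sketched in a strongly simplified form: velocities live on a circle of circumference twice the extended Nyquist velocity, dual-PRF outliers are offset by multiples of twice the low-PRF Nyquist velocity, and a gate is corrected by picking the candidate alias closest (in the circular sense) to the circular mean of its neighbours. The velocities and Nyquist limits below are hypothetical, and this is not the published algorithm:

```python
import numpy as np

v_n = 32.0      # extended Nyquist velocity (m/s), hypothetical
v_low = 8.0     # low-PRF Nyquist velocity; dual-PRF errors are multiples of 2*v_low

def circ_mean(v):
    """Circular mean of velocities on the [-v_n, v_n) circle."""
    phase = np.pi * v / v_n
    return v_n / np.pi * np.angle(np.mean(np.exp(1j * phase)))

def correct_gate(v_gate, neighbours):
    """Replace a suspect gate by the candidate alias closest, in the
    circular sense, to the circular mean of its neighbours."""
    ref = circ_mean(neighbours)
    cands = v_gate + 2 * v_low * np.arange(-2, 3)
    cands = (cands + v_n) % (2 * v_n) - v_n          # wrap into [-v_n, v_n)
    dist = np.abs((cands - ref + v_n) % (2 * v_n) - v_n)  # circular distance
    return cands[np.argmin(dist)]

# A smooth velocity ray with one gate carrying a +2*v_low dual-PRF error:
truth = np.array([20.0, 20.5, 21.0, 21.5, 22.0])
meas = truth.copy()
meas[2] = (truth[2] + 2 * v_low + v_n) % (2 * v_n) - v_n   # 21 + 16 folds to -27
fixed = correct_gate(meas[2], np.delete(meas, 2))
```

Working with circular means avoids the need to dealias the ray first, since a folded reference and a folded candidate are still compared consistently on the velocity circle.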

  19. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual–pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  20. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned for the LL-SST measurement link, where the traditional K-band microwave instrument with an accuracy of about 1 μm will be complemented by an inter-satellite laser ranging instrument with an accuracy of a few nm. This study focuses on the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and take full advantage of the accuracies they provide. We use full-scale simulations in a realistic environment to investigate whether standard processing techniques suffice to fully exploit the new sensor standards. We do so by performing full numerical closed-loop simulations based on the integral equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new-generation sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the numerical errors that were present in the standard-precision processing, even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions.
As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in their consistent stochastic modeling in the frame of the adjustment process.
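    Consistent stochastic modeling starts from the instrument's power spectral density. A common way to generate synthetic sensor noise with a prescribed spectrum, useful in closed-loop simulations like these, is to shape white noise in the frequency domain; the accelerometer-like amplitude spectral density below is a made-up stand-in, not a mission specification.

```python
import numpy as np

def colored_noise(n, fs, asd, seed=1):
    """Time series whose one-sided amplitude spectral density approximates asd(f)."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    shape = asd(np.maximum(f, f[1]))              # sidestep the f = 0 bin
    white = rng.standard_normal(len(f)) + 1j * rng.standard_normal(len(f))
    spec = white * shape * np.sqrt(n * fs / 4.0)  # scale so PSD ~ asd(f)^2
    spec[0] = 0.0                                 # enforce zero mean
    return np.fft.irfft(spec, n)

fs, n = 10.0, 2 ** 16
# hypothetical accelerometer ASD: white floor with a 1/sqrt(f) rise below 0.1 Hz
asd = lambda f: 1e-10 * np.sqrt(1.0 + 0.1 / f)
x = colored_noise(n, fs, asd)
```

    The resulting series has the low-frequency power excess typical of electrostatic accelerometers, and can be fed into an adjustment with a matching noise covariance (frequency-dependent weighting).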

  1. Mapping GRACE Accelerometer Error

    NASA Astrophysics Data System (ADS)

    Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.

    2017-12-01

    After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.

  2. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc) ... of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by ... resolving subgrid-scale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online

  3. De-aliasing for signal restoration in Propeller MR imaging.

    PubMed

    Chiu, Su-Chin; Chang, Hing-Chiu; Chu, Mei-Lan; Wu, Ming-Long; Chung, Hsiao-Wen; Lin, Yi-Ru

    2017-02-01

    Objects falling outside of the true elliptical field-of-view (FOV) in Propeller imaging show unique aliasing artifacts. This study proposes a de-aliasing approach to restore the signal intensities in Propeller images without extra data acquisition. Computer simulation was performed on the Shepp-Logan head phantom, deliberately placed obliquely, to examine the signal aliasing. In addition, phantom and human imaging experiments were performed using Propeller imaging with various readouts on a 3.0 Tesla MR scanner. De-aliasing using the proposed method was then performed, with the first low-resolution single-blade image used to identify the aliasing patterns in all the single-blade images, followed by standard Propeller reconstruction. The Propeller images without and with de-aliasing were compared. Computer simulations showed signal loss at the image corners along with aliasing artifacts distributed along directions corresponding to the rotational blades, consistent with clinical observations. The proposed de-aliasing operation successfully restored the correct images in both phantom and human experiments. The de-aliasing operation is an effective adjunct to Propeller MR image reconstruction for retrospective restoration of aliased signals.

  4. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    USGS Publications Warehouse

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors in the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency f_saa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above f_saa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than f_saa. A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA.
We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using linear time interpolation for the resampling.
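    The preference for sinc over linear interpolation is easy to reproduce. The sketch below up-samples a periodic, band-limited record by FFT zero-padding (equivalent to periodic sinc interpolation) and compares it against linear interpolation; the frequencies and the up-sampling factor of 8 are arbitrary choices for illustration.

```python
import numpy as np

def sinc_resample(x, up):
    """FFT zero-padding: exact interpolation for band-limited periodic signals."""
    n = len(x)
    X = np.fft.rfft(x)
    if n % 2 == 0:
        X[-1] *= 0.5          # share the Nyquist bin between +f_N and -f_N
    Xz = np.zeros(n * up // 2 + 1, dtype=complex)
    Xz[: len(X)] = X
    return np.fft.irfft(Xz, n * up) * up

n, up = 64, 8
t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n) + 0.3 * np.sin(2 * np.pi * 13 * t / n)

tf = np.arange(n * up) / up           # the fine time grid, in original samples
truth = np.sin(2 * np.pi * 5 * tf / n) + 0.3 * np.sin(2 * np.pi * 13 * tf / n)

# linear interpolation, with one wrap-around point so the last segment is defined
lin = np.interp(tf, np.arange(n + 1), np.append(x, x[0]))
snc = sinc_resample(x, up)

err_lin = np.max(np.abs(lin - truth))  # curvature error of the linear segments
err_snc = np.max(np.abs(snc - truth))  # near machine precision
```

    For non-periodic accelerograms a windowed-sinc FIR interpolator is used instead of FFT zero-padding, but the band-limited argument is the same.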

  5. Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.

    PubMed

    Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi

    2017-05-01

    When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency-domain aliasing analyses, we carry out a spatial-domain analysis to reveal whether angular aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing versus non-aliasing regions and angular aliasing removal. Experiments on both synthetic scenes and real light field datasets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.

  6. Fluid Motion and the Toroidal Magnetic Field Near the Top of Earth's Liquid Outer Core.

    NASA Astrophysics Data System (ADS)

    Celaya, Michael Augustine

    This work considers two unresolved problems central to the study of Earth's deep interior: (1) What is the surface flow of the complete three dimensional motion sustaining the geomagnetic field in the fluid outer core? (2) How strong is the toroidal component of that field just beneath the mantle inside the core? A solution of these problems is necessary to achieve even a basic understanding of magnetic field generation and core-mantle interactions. Progress in solving (1) is made by extending previous attempts to resolve the core surface flow, and identifying obstacles which lead to distorted solutions. The extension relaxes the steady motions constraint. This permits more realistic solutions which should resemble more closely the real Earth flow. A difficulty with the assumption of steady flow is that if the real motion is unsteady, as it is likely to be, then steady models will suffer from aliasing. Aliased solutions can be highly corrupted. The effects of aliasing incurred through model underparametrization are explored. It is found that flow spectral energy must fall rapidly with increasing degree to escape aliasing's distortion. Damping does not appear to remedy the problem, but in fact obscures it by forcing the solution to converge upon a single, but possibly still aliased estimate. Inversions of a magnetic field model for unsteady motions indicate steady flows are indeed aliased in time. By comparison, unsteady flows appear free of aliasing and show significant temporal variation, changing by about 30% of their magnitude over 20 years. However, it appears that noise in the high degree secular variation (SV) data used to determine the flow acts as a further impediment to solving (1). Damping is shown to be effective in removing noise, but only once aliasing is no longer a factor and noise is restricted to that part of the SV which makes only a small contribution to the solution. 
To solve (2), the radial component of Ohm's law is inverted for the toroidal field (B_T) near the top of the core. The flow, obtained as a solution to (1), is treated as a known quantity, as is the poloidal field. Solutions are sought which minimize the difference between observed and predicted poloidal main field at Earth's surface. As in problem (1), aliasing in space and time stands as a potential impediment to good resolution of the toroidal field. Steady degree 10 models of B_T are obtained which display convergence in space and time without damping. Poloidal field noise, as well as sensitivity to the flow model used in the inversions, limits resolution of the toroidal field geometry. Nevertheless, estimates indicate the magnitude of B_T does not exceed 8 × 10^-5 T, or about half that of the poloidal field near the core surface. Such a low value favors weak-field dynamo models but does not necessarily endorse a geostrophic force balance just beneath the mantle because ∂_r B_T may be large enough to violate conditions required by geostrophy.

  7. Subdaily alias and draconitic errors in the IGS orbits

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Ray, J.

    2011-12-01

    Harmonic signals with a fundamental period near the GPS draconitic year (351.2 d) and overtones up to the 8th multiple have been observed in the power spectra of nearly all products of the International GNSS Service (IGS), including station position time series [Ray et al., 2008; Collilieux et al., 2007; Santamaría-Gómez et al., 2011], apparent geocenter motions [Hugentobler et al., 2008], and orbit jumps between successive days and midnight discontinuities in Earth orientation parameter (EOP) rates [Ray and Griffiths, 2009]. Ray et al. [2008] suggested two mechanisms for the harmonics: mismodeling of orbit dynamics and aliasing of near-sidereal local station multipath effects. King and Watson [2010] have studied the propagation of local multipath errors into draconitic position variations, but orbit-related processes have been less well examined. Here we elaborate our earlier analysis of GPS orbit jumps [Griffiths and Ray, 2009; Gendt et al., 2010] where we observed some draconitic features as well as prominent spectral bands near 29, 14, 9, and 7 d periods. Finer structures within the sub-seasonal bands fall close to the expected alias frequencies of subdaily EOP tide lines but do not coincide precisely. While once-per-rev empirical orbit parameters should strongly absorb any subdaily EOP tide errors due to near-resonance of their respective periods, the observed differences require explanation. This has been done by simulating known EOP tidal errors and checking their impact on a long series of daily GPS orbits. Indeed, simulated tidal aliases are found to be very similar to the observed orbital features in the sub-seasonal bands. Moreover and unexpectedly, some low draconitic harmonics were also stimulated, potentially a source for the widespread errors in most IGS products.
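    The connection between subdaily EOP tide lines and the multi-day spectral bands follows from simple alias arithmetic: a tide of frequency f observed at a 24 h cadence folds to |f − n·f_s|. The sketch below uses standard tidal periods; the resulting roughly 14.8 d and 9.6 d alias periods match the sub-seasonal bands quoted above, while the diurnal K1 line folds to an annual-scale period.

```python
import numpy as np

def alias_period_days(tide_period_hours, sample_interval_days=1.0):
    """Alias period of a tidal line under regular sampling (fold to [0, fs/2])."""
    f = 24.0 / tide_period_hours            # tidal frequency, cycles per day
    fs = 1.0 / sample_interval_days         # sampling frequency, cycles per day
    f_alias = abs(f - round(f / fs) * fs)   # distance to nearest multiple of fs
    return np.inf if f_alias == 0 else 1.0 / f_alias

# standard tidal periods in hours
tides = {"M2": 12.4206, "N2": 12.6583, "O1": 25.8193, "K1": 23.9345}
aliases = {name: alias_period_days(p) for name, p in tides.items()}
```

    M2 folds to about 14.8 d and N2 to about 9.6 d under daily sampling, consistent with the observed 14 and 9 d bands; the precise offsets from the once-per-rev absorption discussed above are what the simulation in the abstract examines.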

  8. Cosine beamforming

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Wapenaar, Kees

    2014-05-01

    In various application areas, e.g., seismology, astronomy and geodesy, arrays of sensors are used to characterize incoming wavefields due to distant sources. Beamforming is a general term for phase-adjusted summations over the different array elements, for untangling the directionality and elevation angle of the incoming waves. For characterizing noise sources, beamforming is conventionally applied with a temporal Fourier and a 2D spatial Fourier transform, possibly with additional weights. These transforms become aliased for higher frequencies and sparser array-element distributions. As a partial remedy, we derive a kernel for beamforming crosscorrelated data and call it cosine beamforming (CBF). By applying beamforming not directly to the data, but to crosscorrelated data, the sampling is effectively increased. We show that CBF, due to this better sampling, suffers less from aliasing and yields higher resolution than conventional beamforming. As the flip side of the coin, the CBF output shows more smearing for spherical waves than conventional beamforming.
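    The aliasing that CBF mitigates shows up already in the simplest conventional (delay-and-sum) beamformer: with element spacing above half a wavelength, two trial wavenumbers produce identical array responses. A minimal single-frequency sketch with made-up numbers, not the CBF kernel itself:

```python
import numpy as np

wavelength = 10.0                   # m -- assumed monochromatic plane wave
k_true = 2 * np.pi / wavelength     # along-array wavenumber of the arrival

d = 8.0                             # element spacing in m: > wavelength/2
x = np.arange(16) * d               # a 16-element uniform line array
data = np.exp(1j * k_true * x)      # noise-free narrowband snapshot

k_scan = np.linspace(-np.pi / 2, np.pi / 2, 4001)   # trial wavenumbers (rad/m)
steer = np.exp(-1j * np.outer(k_scan, x))
power = np.abs(steer @ data) ** 2 / len(x) ** 2     # normalized beam power

k_alias = k_true - 2 * np.pi / d    # grating lobe indistinguishable from k_true
p_true = power[np.argmin(np.abs(k_scan - k_true))]
p_alias = power[np.argmin(np.abs(k_scan - k_alias))]
```

    Both powers equal the mainlobe value, so the direction of arrival is ambiguous; a finer effective spatial sampling, as the crosscorrelation kernel provides, pushes such grating lobes out of the scanned band.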

  9. Precise automatic differential stellar photometry

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.; Genet, Russell M.; Boyd, Louis J.; Borucki, William J.; Lockwood, G. Wesley

    1991-01-01

    The factors limiting the precision of differential stellar photometry are reviewed. Errors due to variable atmospheric extinction can be reduced to below 0.001 mag at good sites by utilizing the speed of robotic telescopes. Existing photometric systems produce aliasing errors, which are several millimagnitudes in general but may be reduced to about a millimagnitude in special circumstances. Conventional differential photometry neglects several other important effects, which are discussed in detail. If all of these are properly handled, it appears possible to do differential photometry of variable stars with an overall precision of 0.001 mag with ground based robotic telescopes.

  10. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce the aliasing and interpolation errors produced by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. 
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.

  11. On the use of kinetic energy preserving DG-schemes for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Flad, David; Gassner, Gregor

    2017-12-01

    Recently, element based high order methods such as Discontinuous Galerkin (DG) methods and the closely related flux reconstruction (FR) schemes have become popular for compressible large eddy simulation (LES). Element based high order methods with Riemann solver based interface numerical flux functions offer an interesting dispersion dissipation behavior for multi-scale problems: dispersion errors are very low for a broad range of scales, while dissipation errors are very low for well resolved scales and are very high for scales close to the Nyquist cutoff. In some sense, the inherent numerical dissipation caused by the interface Riemann solver acts as a filter of high frequency solution components. This observation motivates the trend that element based high order methods with Riemann solvers are used without an explicit LES model added. Only the high frequency type inherent dissipation caused by the Riemann solver at the element interfaces is used to account for the missing sub-grid scale dissipation. Due to under-resolution of vortical dominated structures typical for LES type setups, element based high order methods suffer from stability issues caused by aliasing errors of the non-linear flux terms. A very common strategy to fight these aliasing issues (and instabilities) is so-called polynomial de-aliasing, where interpolation is exchanged with projection based on an increased number of quadrature points. In this paper, we start with this common no-model or implicit LES (iLES) DG approach with polynomial de-aliasing and Riemann solver dissipation and review its capabilities and limitations. We find that the strategy gives excellent results, but only when the resolution is such, that about 40% of the dissipation is resolved. For more realistic, coarser resolutions used in classical LES e.g. of industrial applications, the iLES DG strategy becomes quite inaccurate. 
We show that there is no obvious fix to this strategy, as adding, for instance, a sub-grid-scale model on top doesn't change much, or in the worst case decreases the fidelity even more. Finally, the core of this work is a novel LES strategy based on split form DG methods that are kinetic energy preserving. The scheme offers excellent stability with full control over the amount and shape of the added artificial dissipation. This is the main idea of the work, and we assess the LES capabilities of the novel split form DG approach when applied to shock-free, moderate Mach number turbulence. We demonstrate that the novel DG LES strategy offers accuracy similar to the iLES methodology for well resolved cases, but strongly increases fidelity in case of more realistic coarse resolutions.

  12. Error analysis for spectral approximation of the Korteweg-De Vries equation

    NASA Technical Reports Server (NTRS)

    Maday, Y.

    1987-01-01

    The conservation and convergence properties of spectral Fourier methods for the numerical approximation of the Korteweg-de Vries equation are analyzed. It is proved that the (aliased) collocation pseudospectral method enjoys the same convergence properties as the spectral Galerkin method, which is less effective from the computational point of view. This result provides a precise mathematical answer to a question raised by several authors in recent years.
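    The aliasing at issue arises when a quadratic term such as u² (or u·u_x in KdV) is formed on the collocation grid: product modes above the grid's Nyquist fold back onto retained modes. The sketch below shows the effect and the standard 3/2-rule padding cure; it is a generic Fourier collocation illustration, not the paper's analysis.

```python
import numpy as np

n = 32
theta = 2 * np.pi * np.arange(n) / n
u = np.cos(7 * theta) + np.cos(10 * theta)
# exact: u^2 = 1 + cos(3t) + 0.5 cos(14t) + cos(17t) + 0.5 cos(20t)

# collocation product on the coarse grid: modes 17 and 20 exceed Nyquist (16)
# and alias back onto bins 15 and 12 (half-amplitude spectrum via rfft/n)
aliased = np.fft.rfft(u * u) / n

# 3/2-rule de-aliasing: zero-pad the spectrum, multiply on the fine grid,
# transform back, and truncate to the original resolution
m = 3 * n // 2
U = np.fft.rfft(u)
Up = np.zeros(m // 2 + 1, dtype=complex)
Up[: len(U)] = U
up = np.fft.irfft(Up, m) * (m / n)            # rescale for the longer transform
dealiased = np.fft.rfft(up * up)[: n // 2 + 1] / m
```

    On the padded grid the product is resolved exactly, so the retained bins are alias-free; truncating modes 17 and 20 afterwards is an ordinary resolution limit, not an aliasing error.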

  13. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10 day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K-band range rate (KBRR) residuals (error level of 1 μm s⁻¹), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of the water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass changes in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate index and in-situ observations). Regional TWS solutions show higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions obtained as a linear combination of GFZ, CSR and JPL GRACE solutions).

  14. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel has a straightforward implementation in existing QSM routines.
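    The contrast between the two formulations can be sketched directly in k-space. The continuous kernel is D(k) = 1/3 − k_z²/|k|²; for the discrete variant below, each k_i² is replaced by the eigenvalue of a second-order finite-difference operator, proportional to sin²(π·n_i). This particular discretization is an illustrative assumption, not necessarily the paper's exact kernel.

```python
import numpy as np

N = 64
n = np.fft.fftfreq(N)                      # normalized frequencies, cycles/voxel
kx, ky, kz = np.meshgrid(n, n, n, indexing="ij")

# Continuous-Fourier dipole kernel: D = 1/3 - kz^2 / |k|^2
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = np.inf                       # define the k = 0 value as 1/3
D_cont = 1.0 / 3.0 - kz**2 / k2

# Discrete variant: k_i^2 -> central-difference Laplacian eigenvalue
s = lambda v: np.sin(np.pi * v) ** 2       # proportional to the FD eigenvalue
d2 = s(kx) + s(ky) + s(kz)
d2[0, 0, 0] = np.inf
D_disc = 1.0 / 3.0 - s(kz) / d2

low = abs(D_cont[1, 2, 3] - D_disc[1, 2, 3])        # low-frequency voxel
high = abs(D_cont[31, 0, 16] - D_disc[31, 0, 16])   # near-Nyquist voxel
```

    The two kernels agree at low spatial frequency and diverge near Nyquist, which is where the continuous formulation produces the over-oscillation and aliasing the abstract describes.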

  15. Shearlet transform in aliased ground roll attenuation and its comparison with f-k filtering and curvelet transform

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-06-01

    Ground roll, which is a Rayleigh surface wave that exists in land seismic data, may mask reflections. Sometimes ground roll is spatially aliased. Attenuation of aliased ground roll is of importance in seismic data processing. Different methods have been developed to attenuate ground roll. The shearlet transform is a directional and multidimensional transform that generates subimages of an input image in different directions and scales. Events with different dips are separated in these subimages. In this study, the shearlet transform is used to attenuate the aliased ground roll. To do this, a shot record is divided into several segments, and the appropriate mute zone is defined for all segments. The shearlet transform is applied to each segment. The subimages related to the non-aliased and aliased ground roll are identified by plotting the energy distributions of subimages with visual checking. Then, muting filters are used on selected subimages. The inverse shearlet transform is applied to the filtered segment. This procedure is repeated for all segments. Finally, all filtered segments are merged using the Hanning window. This method of aliased ground roll attenuation was tested on a synthetic dataset and a field shot record from the west of Iran. The synthetic shot record included strong aliased ground roll, whereas the field shot record did not. To produce the strong aliased ground roll on the field shot record, the data were resampled in the offset direction from 30 to 60 m. To show the performance of the shearlet transform in attenuating the aliased ground roll, we compared the shearlet transform with the f-k filtering and curvelet transform. We showed that the performance of the shearlet transform in the aliased ground roll attenuation is better than that of the f-k filtering and curvelet transform in both the synthetic and field shot records. 
However, when the dip and frequency content of the aliased ground roll are the same as those of the reflections, the ability of the shearlet transform to attenuate the aliased ground roll is limited.

  16. 76 FR 21628 - Implementation of Additional Changes From the Annual Review of the Entity List; Removal of Person...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... Engineering Physics.'' The changes included revising the entry to add additional aliases for that entry. The... listing the aliases as separate aliases for the Chinese Academy of Engineering Physics. China (1) Chinese Academy of Engineering Physics, a.k.a., the following nineteen aliases: --Ninth Academy; --Southwest...

  17. Space and time aliasing structure in monthly mean polar-orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Zeng, Lixin; Levy, Gad

    1995-01-01

    Monthly mean wind fields from the European Remote Sensing Satellite (ERS1) scatterometer are presented. A banded structure which resembles the satellite subtrack is clearly and consistently apparent in the isotachs as well as the u and v components of the routinely produced fields. The structure also appears in the means of data from other polar-orbiting satellites and instruments. An experiment is designed to trace the cause of the banded structure. The European Centre for Medium-Range Weather Forecast (ECMWF) gridded surface wind analyses are used as a control set. These analyses are also sampled with the ERS1 temporal-spatial sampling pattern to form a simulated scatterometer wind set. Both sets are used to create monthly averages. The banded structures appear in the monthly mean simulated data but do not appear in the control set. It is concluded that the source of the banded structure lies in the spatial and temporal sampling of the polar-orbiting satellite which results in undersampling. The problem involves multiple timescales and space scales, over-sampling and under-sampling in space, aliasing in the time and space domains, and preferentially sampled variability. It is shown that commonly used spatial smoothers (or filters), while producing visually pleasing results, also significantly bias the true mean. A three-dimensional spatial-temporal interpolator is designed and used to determine the mean field. It is found to produce satisfactory monthly means from both simulated and real ERS1 data. The implications for climate studies involving polar-orbiting satellite data are discussed.
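    A one-point toy version of the undersampling problem: a diurnal tidal signal (K1, period 23.93 h) averages to nearly zero over a month, but once-per-day sampling, as from a polar orbiter revisiting at nearly the same local time, aliases it to a roughly one-year period, so the monthly mean of the samples inherits a spurious offset. The numbers below are illustrative, not a model of the ERS1 sampling pattern.

```python
import numpy as np

k1 = 23.9345 / 24.0                 # K1 diurnal tidal period, in days

# true field at one location, evaluated densely over one month
t = np.linspace(0.0, 30.0, 200001)
mean_true = np.mean(np.sin(2 * np.pi * t / k1))   # the tide averages out

# a polar orbiter revisits that location once per day: the diurnal cycle
# aliases to a ~365 d period, so 30 samples all sit on the same slow ramp
t_sat = np.arange(30.0)
mean_sat = np.mean(np.sin(2 * np.pi * t_sat / k1))  # spurious monthly offset
```

    Because the alias period differs from place to place with local crossing time, such offsets vary along the swath geometry, which is one way banded structures can arise in gridded monthly means.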

  18. Potential and Pitfalls of High-Rate GPS

    NASA Astrophysics Data System (ADS)

    Smalley, R.

    2008-12-01

    With completion of the Plate Boundary Observatory (PBO), we are poised to capture a dense sampling of strong motion displacement time series from significant earthquakes in western North America with High-Rate GPS (HRGPS) data collected at 1 and 5 Hz. These data will provide displacement time series at potentially zero epicentral distance that, if valid, have great potential to contribute to understanding earthquake rupture processes. The caveat relates to whether or not the data are aliased: is the sampling rate fast enough to accurately capture the displacement's temporal history? Using strong motion recordings in the immediate epicentral area of several magnitude 6.7–7.5 events, which can reasonably be expected in the PBO footprint, we find that even the 5 Hz data may be aliased. Some sort of anti-alias processing, currently not applied, will therefore be necessary at the closest stations to guarantee the veracity of the displacement time series. We discuss several solutions based on a priori knowledge of the expected ground motion and their practicality of implementation.
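A hedged numerical sketch of the kind of anti-alias processing the record calls for (the rates and tone frequencies are invented for illustration): a 4 Hz ground-motion component folds to |4 − 5| = 1 Hz under naive decimation to 5 Hz, while a low-pass FIR filter applied before decimation suppresses it.

```python
import numpy as np

fs = 100.0                        # "true" ground-motion sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
x = np.cos(2 * np.pi * 4.0 * t)   # 4 Hz component, above the 2.5 Hz Nyquist of 5 Hz data

# Naive decimation to 5 Hz: the 4 Hz tone folds to 1 Hz at full amplitude.
naive = x[::20]

# Windowed-sinc low-pass FIR (cutoff 2 Hz) applied before decimation.
numtaps = 201
n = np.arange(numtaps) - (numtaps - 1) / 2
h = (2 * 2.0 / fs) * np.sinc(2 * 2.0 / fs * n) * np.hamming(numtaps)
h /= h.sum()                      # unit DC gain
filtered = np.convolve(x, h, mode="same")[::20]

def rms(v):
    return np.sqrt(np.mean(v[5:-5] ** 2))   # trim filter edge effects

print(rms(naive), rms(filtered))  # the aliased tone survives only the naive path
```

The filter design here is a generic textbook FIR, not a claim about what a GPS processing chain should use; the point is only that the out-of-band energy must be removed before the sample rate is reduced.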

  19. USING LEAKED POWER TO MEASURE INTRINSIC AGN POWER SPECTRA OF RED-NOISE TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, S. F.; Xue, Y. Q., E-mail: zshifu@mail.ustc.edu.cn, E-mail: xuey@ustc.edu.cn

    Fluxes emitted at different wavebands from active galactic nuclei (AGNs) fluctuate on both long and short timescales. The variation can typically be characterized by a broadband power spectrum, which exhibits a red-noise process at high frequencies. The standard method of estimating the power spectral density (PSD) of AGN variability is easily affected by systematic biases such as red-noise leakage and aliasing, in particular when the observation spans a relatively short period and is gapped. Focusing on the high-frequency PSD, which is strongly distorted by red-noise leakage but usually not significantly affected by aliasing, we develop a novel and observable normalized leakage spectrum (NLS), which sensitively describes the effects of leaked red-noise power on the PSD at different temporal frequencies. Using Monte Carlo simulations, we demonstrate how an AGN's underlying PSD sensitively determines the NLS when there is severe red-noise leakage, and thereby how the NLS can be used to effectively constrain the underlying PSD.

  20. Digital Moiré based transient interferometry and its application in optical surface measurement

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Tan, Yifeng; Wang, Shaopu; Hu, Yao

    2017-10-01

    Digital Moiré based transient interferometry (DMTI) is an effective non-contact testing method for optical surfaces. In a DMTI system, only one frame of real interferogram is experimentally captured for the transient measurement of the surface under test (SUT). When combined with partial compensation interferometry (PCI), DMTI is especially appropriate for the measurement of aspheres with large apertures, large asphericity, or differing surface parameters. Residual wavefront is allowed in PCI, so the same partial compensator can be applied to the detection of multiple SUTs. However, excessive residual wavefront aberration results in spectrum aliasing, which limits the dynamic range of DMTI. To solve this problem, a method based on the wavelet transform is proposed to extract phase from a fringe pattern with spectrum aliasing. Simulation results demonstrate the validity of this method. The dynamic range of digital Moiré technology is effectively expanded, which makes DMTI promising for surface figure error measurement in the intelligent fabrication of aspheric surfaces.

  1. Interleaved EPI based fMRI improved by multiplexed sensitivity encoding (MUSE) and simultaneous multi-band imaging.

    PubMed

    Chang, Hing-Chiu; Gaur, Pooja; Chou, Ying-hui; Chu, Mei-Lan; Chen, Nan-kuei

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive and powerful imaging tool for detecting brain activities. The majority of fMRI studies are performed with single-shot echo-planar imaging (EPI) due to its high temporal resolution. Recent studies have demonstrated that, by increasing the spatial-resolution of fMRI, previously unidentified neuronal networks can be measured. However, it is challenging to improve the spatial resolution of conventional single-shot EPI based fMRI. Although multi-shot interleaved EPI is superior to single-shot EPI in terms of the improved spatial-resolution, reduced geometric distortions, and sharper point spread function (PSF), interleaved EPI based fMRI has two main limitations: 1) the imaging throughput is lower in interleaved EPI; 2) the magnitude and phase signal variations among EPI segments (due to physiological noise, subject motion, and B0 drift) are translated to significant in-plane aliasing artifact across the field of view (FOV). Here we report a method that integrates multiple approaches to address the technical limitations of interleaved EPI-based fMRI. Firstly, the multiplexed sensitivity-encoding (MUSE) post-processing algorithm is used to suppress in-plane aliasing artifacts resulting from time-domain signal instabilities during dynamic scans. Secondly, a simultaneous multi-band interleaved EPI pulse sequence, with a controlled aliasing scheme incorporated, is implemented to increase the imaging throughput. Thirdly, the MUSE algorithm is then generalized to accommodate fMRI data obtained with our multi-band interleaved EPI pulse sequence, suppressing both in-plane and through-plane aliasing artifacts. The blood-oxygenation-level-dependent (BOLD) signal detectability and the scan throughput can be significantly improved for interleaved EPI-based fMRI. Our human fMRI data obtained from 3 Tesla systems demonstrate the effectiveness of the developed methods. 
It is expected that future fMRI studies requiring high spatial-resolvability and fidelity will largely benefit from the reported techniques.

  2. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941
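The SENSE unfolding step that IRIS modifies can be illustrated with a noise-free toy problem: at an acceleration factor of 2, each aliased pixel is a sensitivity-weighted sum of two pixels half a field of view apart, and a small linear system per location separates them. The coil sensitivities and "image" below are invented for illustration; IRIS additionally folds the shot-to-shot navigator phase maps into this system, which the sketch omits.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                     # image rows; R=2 undersampling folds row y onto y + N//2
x_true = rng.random(N)     # 1-D "image" column (toy stand-in)

# Two coils with smooth, linearly varying sensitivities (illustrative choice)
s = np.stack([np.linspace(0.2, 1.0, N), np.linspace(1.0, 0.2, N)])  # (coils, N)

# R=2 undersampling: each aliased sample is a weighted sum of two pixels
alias = np.stack([s[c, :N // 2] * x_true[:N // 2] +
                  s[c, N // 2:] * x_true[N // 2:] for c in range(2)])  # (coils, N//2)

# SENSE unfolding: per aliased location, solve a 2x2 system A @ pix = b
x_hat = np.empty(N)
for y in range(N // 2):
    A = np.array([[s[0, y], s[0, y + N // 2]],
                  [s[1, y], s[1, y + N // 2]]])
    pix = np.linalg.solve(A, alias[:, y])
    x_hat[y], x_hat[y + N // 2] = pix

assert np.allclose(x_hat, x_true)   # exact in the noise-free toy case
```

With motion-induced phase errors, the matrix A would gain shot-dependent phase factors; that is the modification the IRIS reconstruction introduces.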

  3. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main factors that degrade radial imaging quality. For a given fixed number of k-space projections, the data distributions along the radial and angular directions influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projections alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections, and a significant reduction of aliasing artifacts was observed using the SP sampling trajectory. The two trajectories were also compared at different sampling frequencies: an SP trajectory exhibits the same aliasing character when half the sampling frequency (i.e., half the data) is used for reconstruction. SNR comparisons at different white-noise levels show that the two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a route to undersampled reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.

  4. Adaptive attenuation of aliased ground roll using the shearlet transform

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause it to overlap with reflections in the f-k domain. The shearlet transform is a directional, multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After a filtering zone is defined, an input shot record is divided into segments, each overlapping its adjacent segments. The shearlet transform is applied to each segment, the subimages containing aliased and non-aliased ground roll are identified, and the locations of these events on each subimage are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation procedure was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated by the proposed procedure. We also applied the method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections than the stacked section before attenuation. The proposed method has some drawbacks, such as a longer run time than traditional methods like f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.

  5. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate the role of these aliased dynamics, with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.

  6. Wiener-matrix image restoration beyond the sampling passband

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    A finer-than-sampling-lattice resolution image can be obtained using multiresponse image gathering and Wiener-matrix restoration. Multiresponse image gathering weights the within-passband and aliased signal components differently, allowing the Wiener-matrix restoration filter to unscramble these signal components and restore spatial frequencies beyond the sampling passband of the photodetector array. The A multiresponse images can be reassembled into a single minimum mean-square-error image with a resolution that is √A times finer than the photodetector-array sampling lattice.
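The record describes a multiresponse Wiener-matrix method; the single-response one-dimensional Wiener filter below is only its basic building block, sketched under invented parameters (blur width, noise level, regularization) to show how the frequency-domain restoration W = H*/(|H|² + NSR) trades noise amplification against deblurring.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = np.zeros(N)
x[50:110] = 1.0                    # a broad feature with sharp edges
x[180:185] = 2.0                   # a narrow, high-contrast feature

# Circular Gaussian blur standing in for the image-gathering response
i = np.arange(N)
d = np.minimum(i, N - i)
psf = np.exp(-0.5 * (d / 3.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(psf)
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.005 * rng.standard_normal(N)

# Classic single-response Wiener filter: W = H* / (|H|^2 + NSR)
nsr = 1e-3                         # assumed noise-to-signal power ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * W))

print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))
```

The matrix variant in the record generalizes the scalar division to a per-frequency linear solve across the multiple responses, which is what lets it also separate aliased components.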

  7. Reconstruction of full high-resolution HSQC using signal split in aliased spectra.

    PubMed

    Foroozandeh, Mohammadali; Jeannerat, Damien

    2015-11-01

    Resolution enhancement is a long-sought goal in NMR spectroscopy. In conventional multidimensional NMR experiments, such as the ¹H-¹³C HSQC, the resolution in the indirect dimensions is typically 100 times lower than in 1D spectra because it is limited by the experimental time. Reducing the spectral window can significantly increase the resolution, but at the cost of ambiguities in frequencies as a result of spectral aliasing. Fortunately, this information is not completely lost and can be retrieved using methods in which chemical shifts are encoded in the aliased spectra and decoded after processing to reconstruct a high-resolution ¹H-¹³C HSQC spectrum with full spectral width and a resolution similar to that of 1D spectra. We applied a new reconstruction method, RHUMBA (reconstruction of high-resolution using multiplet built on aliased spectra), to spectra obtained from the differential evolution for non-ambiguous aliasing HSQC and the new AMNA (additional modulation for non-ambiguous aliasing) HSQC experiments. The reconstructed spectra significantly facilitate both manual and automated spectral analysis and structure elucidation based on heteronuclear 2D experiments. The resolution is enhanced by two orders of magnitude without the usual complications due to spectral aliasing. Copyright © 2015 John Wiley & Sons, Ltd.
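The folding that creates these ambiguities is simple modular arithmetic: an offset outside a spectral window of width sw reappears wrapped into the window. A toy sketch (the offsets and window width are invented, not from the record):

```python
def folded(offset_hz, sw_hz):
    """Apparent offset after aliasing into a spectral window of width sw,
    centered on the carrier (both in Hz)."""
    return (offset_hz + sw_hz / 2.0) % sw_hz - sw_hz / 2.0

sw = 2000.0                               # deliberately narrow 13C window
for true in (-9500.0, -3100.0, 4200.0):   # illustrative true offsets
    print(true, "->", folded(true, sw))

# The ambiguity: distinct true offsets can land on the same folded position,
# which is what the encoding/decoding schemes above are designed to resolve.
assert folded(500.0, sw) == folded(2500.0, sw) == 500.0
```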

  8. Improvements to photometry. Part 1: Better estimation of derivatives in extinction and transformation equations

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.

    1988-01-01

    Atmospheric extinction in wideband photometry is examined both analytically and through numerical simulations. If the derivatives that appear in the Stromgren-King theory are estimated carefully, it appears that wideband measurements can be transformed to outside the atmosphere with errors no greater than a millimagnitude. A numerical-analysis approach is used to estimate derivatives of both the stellar and atmospheric extinction spectra, avoiding previous assumptions that the extinction follows a power law. However, it is essential to satisfy the requirements of the sampling theorem to keep aliasing errors small. Typically, this means that band separations cannot exceed half of the full width at half-peak response. Further work is needed to examine higher-order effects, which may well be significant.

  9. Evaluating Health Outcomes of Criminal Justice Populations Using Record Linkage: The Importance of Aliases

    ERIC Educational Resources Information Center

    Larney, Sarah; Burns, Lucy

    2011-01-01

    Individuals in contact with the criminal justice system are a key population of concern to public health. Record linkage studies can be useful for studying health outcomes for this group, but the use of aliases complicates the process of linking records across databases. This study was undertaken to determine the impact of aliases on sensitivity…

  10. Cartographic symbol library considering symbol relations based on anti-aliasing graphic library

    NASA Astrophysics Data System (ADS)

    Mei, Yang; Li, Lin

    2007-06-01

    Cartographic visualization represents geographic information in map form, which enables the retrieval of useful geospatial information. In a digital environment, the cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System. Existing cartographic symbol libraries have two flaws: display quality and relation adjusting. Statistical data presented in this paper indicate that aliasing is a major factor affecting symbol display quality on graphic display devices. Effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are therefore presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, in addition to displaying individual features, but current cartographic symbol libraries lack this capability. This paper develops a cartographic symbol design model to implement symbol-relation adjusting; consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. Sample implementations of the anti-aliasing graphic library and the cartographic symbol library show that both libraries achieve good efficiency and display quality.

  11. Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Radhakrishnan, Senthilkumaran

    2012-01-01

    High-fidelity models of plume-regolith interaction are difficult to develop because of the widely disparate flow conditions that exist in this process. The gas in the core of a rocket plume can often be modeled as a time-dependent, high-temperature, turbulent, reacting continuum flow. However, due to the vacuum conditions on the lunar surface, the mean free path in the outer parts of the plume is too long for the continuum assumption to remain valid; molecular methods are better suited to model this region of the flow. Finally, granular and multiphase flow models must be employed to describe the dust and debris displaced from the surface, as well as how a crater is formed in the regolith. At present, standard commercial CFD (computational fluid dynamics) software is not capable of coupling each of these flow regimes to provide an accurate representation of this flow process, necessitating the development of custom software. This software solves the fluid-flow-governing equations in an Eulerian framework, coupled with particle transport equations solved in a Lagrangian framework. It uses a fourth-order explicit Runge-Kutta scheme for temporal integration and an eighth-order central finite-differencing scheme for spatial discretization. The nonlinear terms in the governing equations are recast in cubic skew-symmetric form to reduce aliasing error. The second-derivative viscous terms are computed using eighth-order narrow stencils that provide better diffusion for the highest resolved wave numbers. A fourth-order Lagrange interpolation procedure is used to obtain gas-phase variable values at the particle locations.
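The cubic skew-symmetric recasting in the record applies to the compressible equations; as a hedged one-dimensional analogue (not the authors' formulation), the classical skew-symmetric split of the Burgers term u·u_x shows the mechanism: it cancels the spurious discrete energy production that aliasing introduces when the advective or divergence form is used alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
h = 2 * np.pi / N
u = rng.standard_normal(N)          # rough field, so discrete products alias

def ddx(f):
    """Second-order central difference on a periodic grid (antisymmetric operator)."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

# Three discretizations of the quadratic (Burgers-type) term u * u_x
n_adv = u * ddx(u)                  # advective form
n_div = 0.5 * ddx(u * u)            # divergence (conservative) form
n_skew = (n_adv + 2.0 * n_div) / 3.0  # skew-symmetric split of u * u_x

# Discrete "energy production" sum(u * n): nonzero for the advective form,
# but the skew-symmetric split cancels it exactly (to rounding), because the
# central-difference operator is antisymmetric on a periodic grid.
print(np.dot(u, n_adv), np.dot(u, n_skew))
```

The higher-order stencils and the full cubic (density-weighted) form in the record serve the same purpose in the compressible setting.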

  12. Design Considerations for a Dedicated Gravity Recovery Satellite Mission Consisting of Two Pairs of Satellites

    NASA Technical Reports Server (NTRS)

    Wiese, D. N.; Nerem, R. S.; Lemoine, F. G.

    2011-01-01

    Future satellite missions dedicated to measuring time-variable gravity will need to address the concern of temporal aliasing errors; i.e., errors due to high-frequency mass variations. These errors have been shown to be a limiting error source for future missions with improved sensors. One method of reducing them is to fly multiple satellite pairs, thus increasing the sampling frequency of the mission. While one could imagine a system architecture consisting of dozens of satellite pairs, this paper explores the more economically feasible option of optimizing the orbits of two pairs of satellites. While the search space for this problem is infinite by nature, steps have been made to reduce it via proper assumptions regarding some parameters and a large number of numerical simulations exploring appropriate ranges for other parameters. A search space originally consisting of 15 variables is reduced to the two variables with the greatest impact on mission performance: the repeat period of both pairs of satellites (shown to be near-optimal when the two are equal), and the inclination of one of the satellite pairs (the other pair is assumed to be in a polar orbit). To arrive at this conclusion, we assume circular orbits, repeat groundtracks for both pairs of satellites, a 100-km inter-satellite separation distance, and a minimum allowable operational satellite altitude of 290 km based on a projected 10-year mission lifetime. Given the scientific objectives of determining time-variable hydrology, ice mass variations, and ocean bottom pressure signals with higher spatial resolution, we find that an optimal architecture consists of a polar pair of satellites coupled with a pair inclined at 72°, both in 13-day repeating orbits.
This architecture provides a 67% reduction in error over one pair of satellites, in addition to reducing the longitudinal striping to such a level that minimal post-processing is required, permitting a substantial increase in the spatial resolution of the gravity field products. It should be emphasized that given different sets of scientific objectives for the mission, or a different minimum allowable satellite altitude, different architectures might be selected.

  13. A novel aliasing-free subband information fusion approach for wideband sparse spectral estimation

    NASA Astrophysics Data System (ADS)

    Luo, Ji-An; Zhang, Xiao-Ping; Wang, Zhi

    2017-12-01

    Wideband sparse spectral estimation is generally formulated as a multi-dictionary/multi-measurement (MD/MM) problem which can be solved by using group sparsity techniques. In this paper, the MD/MM problem is reformulated as a single sparse indicative vector (SIV) recovery problem at the cost of introducing an additional system error. Thus, the number of unknowns is reduced greatly. We show that the system error can be neglected under certain conditions. We then present a new subband information fusion (SIF) method to estimate the SIV by jointly utilizing all the frequency bins. With orthogonal matching pursuit (OMP) leveraging the binary property of SIV's components, we develop a SIF-OMP algorithm to reconstruct the SIV. The numerical simulations demonstrate the performance of the proposed method.

  14. Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms

    DTIC Science & Technology

    2009-09-01

    ...undersampled point spread function. In the literature these types of algorithms are sometimes included under the broad umbrella of superresolution. However, in the current... We use one of these patterns to visually demonstrate successful de-aliasing. 15. SUBJECT TERMS: Image de-aliasing; Superresolution; Microscanning; Image...

  15. Viewing-zone enlargement method for sampled hologram that uses high-order diffraction.

    PubMed

    Mishina, Tomoyuki; Okui, Makoto; Okano, Fumio

    2002-03-10

    We demonstrate a method of enlarging the viewing zone for holography with holograms that have a pixel structure. First, the aliasing generated by the pixel sampling of a hologram is described. Next, the high-order diffracted beams reproduced from a hologram that contains aliasing are explained. Finally, we show that the viewing zone can be enlarged by combining these high-order reconstructed beams from the hologram with aliasing.

  16. In-situ Chemical Exploration and Mapping using an Autonomous Underwater Vehicle

    NASA Astrophysics Data System (ADS)

    Camilli, R.; Bingham, B. S.; Jakuba, M.; Whelan, J.; Singh, H.; Whiticar, M.

    2004-12-01

    Recent advances in in-situ chemical sensing have emphasized several issues associated with making reliable chemical measurements in the ocean. Such measurements are often aliased temporally and/or spatially, and may suffer from instrumentation artifacts such as slow response time, limited dynamic range, hysteresis, and environmental sensitivities (e.g., temperature and pressure). We focus on the in-situ measurement of light hydrocarbons. Specifically, we examine data collected using a number of methods, including a vertical profiler, autonomous underwater vehicle (AUV) surveys, and adaptive spatio-temporal survey techniques. We present data collected using a commercial METS sensor on a vertical profiler to identify and map structures associated with ocean-bottom methane sources in Saanich Inlet off Vancouver, Canada. This sensor was deployed in parallel with a submersible mass spectrometer and a shipboard equilibrator-gas chromatograph. Our results illustrate that spatial offsets as small as centimeters can produce significant differences in measured concentration. In addition, differences in response times between instruments can also alias the measurements. The results of this preliminary experiment underscore the challenges of quantifying ocean chemical processes with small-scale spatial variability and temporal variability that is often faster than the response times of many available instruments. We explore the capabilities and current limitations of autonomous underwater vehicles for extending the spatial coverage of new in-situ sensor technologies. We present data collected from deployments of Seabed, a passively stable, hover-capable AUV, at large-scale gas blowout features located along the U.S. Atlantic margin.
Although these deployments successfully revealed previously unobservable oceanographic processes, temporal aliasing caused by sensor response as well as tidal variability manifests itself, illustrating the possibilities for misinterpretation of localized periodic anomalies. Finally we present results of recent experimental chemical plume mapping surveys that were conducted off the coast of Massachusetts using adaptive behaviors that allow the AUV to optimize its mission plan to autonomously search for chemical anomalies. This adaptive operation is based on coupling the chemical sensor payload within a closed-loop architecture with the vehicle's navigation control system for real-time autonomous data assimilation and decision making processes. This allows the vehicle to autonomously refine the search strategy, thereby improving feature localization capabilities and enabling surveys at an appropriate temporal and spatial resolution.

  17. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, L.; Wang, G.; Wessel, P.

    2017-12-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has frequently been applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at the fingerprint (e.g., 3 cm × 3 cm) to handprint (e.g., 10 cm × 10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain portable DEMs. It is well known that downsampling can result in aliasing, which causes different signal components to become indistinguishable when the signal is reconstructed from datasets with a lower sampling rate. Conventional DEMs are mainly the result of upsampling sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography; as a consequence, the effects of aliasing have not been fully investigated in the open literature on DEMs. This study aims to investigate the spatial aliasing problem and implement an anti-aliasing procedure for regridding dense TLS data. The TLS data collected in the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter, implemented in the Generic Mapping Tools (GMT) software package, as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
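The GMT filter itself is the authoritative implementation; the sketch below is a hand-rolled frequency-domain stand-in for an isotropic fourth-order Butterworth low-pass with the recommended cutoff of three times the target grid size. The surface, grid spacings, and wavelengths are invented for illustration.

```python
import numpy as np

def butterworth_lowpass_2d(z, dx, cutoff_wavelength, order=4):
    """Isotropic Butterworth low-pass applied in the frequency domain;
    amplitude response 1 / sqrt(1 + (f / fc)**(2 * order))."""
    ny, nx = z.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)   # radial frequency, cycles/m
    fc = 1.0 / cutoff_wavelength
    H = 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(z) * H))

# Toy beach surface on a 3 cm grid: a long "dune" plus 12.8 cm roughness
dx = 0.03
y, x = np.mgrid[0:256, 0:256] * dx
z = np.sin(2 * np.pi * x / 3.84) + 0.2 * np.sin(2 * np.pi * x / 0.128)

target_dx = 0.30                         # 30 cm DEM grid
z_filt = butterworth_lowpass_2d(z, dx, cutoff_wavelength=3 * target_dx)
dem = z_filt[::10, ::10]                 # decimate onto the target grid
naive = z[::10, ::10]                    # decimation without anti-aliasing
```

In the naive DEM the 12.8 cm roughness aliases into spurious long-wavelength relief, while the filtered DEM retains the dune essentially unchanged; this mirrors the isotropic mode of the GMT filter described in the record.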

  18. RADIAL VELOCITY PLANETS DE-ALIASED: A NEW, SHORT PERIOD FOR SUPER-EARTH 55 Cnc e

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Rebekah I.; Fabrycky, Daniel C., E-mail: rdawson@cfa.harvard.ed, E-mail: daniel.fabrycky@gmail.co

    2010-10-10

    Radial velocity measurements of stellar reflex motion have revealed many extrasolar planets, but gaps in the observations produce aliases, spurious frequencies that are frequently confused with the planets' orbital frequencies. In the case of Gl 581 d, the distinction between an alias and the true frequency was the distinction between a frozen, dead planet and a planet possibly hospitable to life. To improve the characterization of planetary systems, we describe how aliases originate and present a new approach for distinguishing between orbital frequencies and their aliases. Our approach harnesses features in the spectral window function to compare the amplitude and phase of predicted aliases with peaks present in the data. We apply it to confirm prior alias distinctions for the planets GJ 876 d and HD 75898 b. We find that the true periods of Gl 581 d and HD 73526 b/c remain ambiguous. We revise the periods of HD 156668 b and 55 Cnc e, which were afflicted by daily aliases. For HD 156668 b, the correct period is 1.2699 days and the minimum mass is (3.1 ± 0.4) M⊕. For 55 Cnc e, the correct period is 0.7365 days, the shortest of any known planet, and the minimum mass is (8.3 ± 0.3) M⊕. This revision produces a significantly improved five-planet Keplerian fit for 55 Cnc, and a self-consistent dynamical fit describes the data just as well. As radial velocity techniques push to ever-smaller planets, often found in systems of multiple planets, distinguishing true periods from aliases will become increasingly important.
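A daily alias can be checked with one line of arithmetic: nightly sampling concentrates window-function power near one cycle per sidereal day, so peaks at frequencies separated by that amount mimic one another. The ~2.817-day value referenced below is the previously reported period of 55 Cnc e taken from the wider literature, not from this excerpt, and the helper name is hypothetical.

```python
F_SIDEREAL = 1.0027379   # cycles/day; dominant sampling frequency of nightly data

def daily_alias_period(period_days, m=1):
    """Period of the m-th sidereal-day alias of a periodic signal (hypothetical helper)."""
    return 1.0 / abs(1.0 / period_days - m * F_SIDEREAL)

# The revised 0.7365 d period of 55 Cnc e and its previously reported
# ~2.817 d period are first-order daily aliases of one another:
print(daily_alias_period(0.7365))   # ~2.82 days
```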

  19. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    PubMed

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.

  20. Kalman Filter Techniques for Accelerated Cartesian Dynamic Cardiac Imaging

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2012-01-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories, because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and SNR. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. PMID:22926804
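    The recursive predict/update structure that makes Kalman filtering attractive for real-time reconstruction can be illustrated with a minimal scalar sketch (a toy random-walk state model, not the Cartesian k-space model of the paper):

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state model x_k = x_{k-1} + w_k.

    q: process-noise variance, r: measurement-noise variance.
    Returns one filtered estimate per measurement; each estimate uses only
    past and current data, which is what permits real-time operation.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by the process noise.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noiseless constant signal: the estimate converges to the true value 5.0.
est = kalman_1d([5.0] * 50)
print(round(est[-1], 3))
```

    In the imaging application the state is the (vectorized) dynamic image and the measurement operator encodes k-space sampling and coil sensitivities, but the same two-step recursion applies.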

  1. Moving microphone arrays to reduce spatial aliasing in the beamforming technique: theoretical background and numerical investigation.

    PubMed

    Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello

    2008-12-01

    This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in the beamforming technique. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement. In this paper, the proposed approach is investigated theoretically and numerically by means of simple sound propagation models, demonstrating its efficiency in reducing spatial aliasing. A number of different array configurations are investigated numerically, together with the most important parameters governing this measurement technique. A set of numerical results for the case of a planar rotating array is shown, together with a first experimental validation of the method.

  2. Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument

    NASA Astrophysics Data System (ADS)

    Smith, G. L.; Manalo-Smith, Natividad; Priestley, Kory

    2014-10-01

    The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua, and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so that the size of the smallest features that can be resolved from the data increases, and spatial sampling errors grow, with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from smoothing (blurring) of features at or below the footprint size and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.

  3. The Earth Gravitational Observatory (EGO): Nanosat Constellations For Advanced Gravity Mapping

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Saltman, A.; Bettadpur, S. V.; Nerem, R. S.; Abel, J.

    2017-12-01

    The trend to nanosats for space-based remote sensing is transforming system architectures: fleets of "cellular" craft scanning Earth with exceptional precision and economy. GeoOptics Inc. has been selected by NASA to develop a vision for that transition, with an initial focus on advanced gravity field mapping. Building on our spaceborne GNSS technology, we introduce innovations that will improve gravity mapping roughly tenfold over previous missions at a fraction of the cost. The power of EGO is realized in its N-satellite form, where all satellites in a cluster receive dual-frequency crosslinks from all other satellites, yielding N(N-1)/2 independent measurements. Twelve "cells" thus yield 66 independent links. Because the cells form a 2D arc with spacings ranging from 200 km to 3,000 km, EGO senses a wider range of gravity wavelengths and offers greater geometrical observing strength. The benefits are two-fold: improved time resolution enables observation of sub-seasonal processes, as from hydro-meteorological phenomena; improved measurement quality enhances all gravity solutions. For the GRACE mission, key limitations arise from such spacecraft factors as long-term accelerometer error, attitude knowledge, and thermal stability, which are largely independent from cell to cell. Data from a dozen cells reduce their impact by 3x through the "root-n" averaging effect. Multi-cell closures improve on this further. The many closure paths among 12 cells provide strong constraints to correct for observed range changes not compatible with a gravity source, including accelerometer errors in measuring non-conservative forces. Perhaps more significantly from a science standpoint, system-level estimates with data from diverse orbits can attack the many scientifically limiting sources of temporal aliasing.
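    The crosslink counting and root-n averaging quoted above are straightforward to verify (a sketch of the arithmetic only; the function names are illustrative):

```python
import math

def independent_links(n_cells):
    """Number of unique dual-frequency crosslink pairs among n satellites."""
    return n_cells * (n_cells - 1) // 2

def root_n_error_factor(n_cells):
    """Reduction factor for uncorrelated per-cell errors under averaging."""
    return math.sqrt(n_cells)

print(independent_links(12))              # 66 links, as stated in the abstract
print(round(root_n_error_factor(12), 2))  # ~3.46, the quoted "3x" reduction
```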

  4. Assessment of terrestrial water contributions to polar motion from GRACE and hydrological models

    NASA Astrophysics Data System (ADS)

    Jin, S. G.; Hassan, A. A.; Feng, G. P.

    2012-12-01

    The hydrological contribution to polar motion remains a major challenge in explaining the observed geodetic residual of non-atmospheric and non-oceanic excitations, since hydrological models have limited input from comprehensive global direct observations. Although global terrestrial water storage (TWS) estimated from the Gravity Recovery and Climate Experiment (GRACE) provides a new opportunity to study the hydrological excitation of polar motion, the GRACE gridded data are subject to the post-processing de-striping algorithm, spatial gridded mapping and filter smoothing effects, as well as aliasing errors. In this paper, the hydrological contributions to polar motion are investigated and evaluated at seasonal and intra-seasonal time scales using the recovered degree-2 harmonic coefficients from all GRACE spherical harmonic coefficients and hydrological model data processed with the same filter smoothing and recovering methods, including the Global Land Data Assimilation Systems (GLDAS) model, the Climate Prediction Center (CPC) model, the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis products, and the European Centre for Medium-Range Weather Forecasts (ECMWF) operational model (opECMWF). It is shown that GRACE is better at explaining the geodetic residual of non-atmospheric and non-oceanic polar motion excitations at the annual period, while the models give worse estimates with a larger phase shift or amplitude bias. At the semi-annual period, the GRACE estimates are also generally closer to the geodetic residual, but with some biases in phase or amplitude, due mainly to aliasing errors near the semi-annual period from geophysical models. For periods of less than one year, both the hydrological models and GRACE are generally poorer at explaining the intraseasonal polar motion excitations.

  5. Pattern recognition invariant under changes of scale and orientation

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified version of a method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance; each target, over out-of-plane orientations spanning 360 degrees around a vertical axis, corresponds to a line in a multi-dimensional feature space. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.

  6. Shape of the ocean surface and implications for the Earth's interior: GEOS-3 results

    NASA Technical Reports Server (NTRS)

    Chapman, M. E.; Talwani, M.; Kahle, H.; Bodine, J. H.

    1979-01-01

    A new set of 1 deg x 1 deg mean free air anomalies was used to construct a gravimetric geoid by Stokes' formula for the Indian Ocean. Using this 1 deg x 1 deg geoid, comparisons were made with GEOS-3 radar altimeter estimates of geoid height. Most commonly there were constant offsets and long wavelength discrepancies between the two data sets; probable causes include radial orbit error, scale errors in the geoid, and bias errors in altitude determination. Across the Aleutian Trench the 1 deg x 1 deg gravimetric geoid did not capture the entire depth of the geoid anomaly, owing to averaging over 1 deg squares and the consequent aliasing of the data. After adjustment of the GEOS-3 data to eliminate long wavelength discrepancies, agreement between the altimeter geoid and the gravimetric geoid was between 1.7 and 2.7 meters rms. For purposes of geological interpretation, techniques were developed to compute the geoid anomaly directly over models of density within the Earth. From the satellite altimetry results it was possible to identify geoid anomalies over different geologic features in the ocean. Examples and significant results are reported.

  7. Anti-aliasing Wiener filtering for wave-front reconstruction in the spatial-frequency domain for high-order astronomical adaptive-optics systems.

    PubMed

    Correia, Carlos M; Teixeira, Joel

    2014-12-01

    Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement noise, and aliasing propagation coefficients as a function of the system order, and compare them with classical least-squares estimates. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filter, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.

  8. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, Lin; Wang, Guoquan; Wessel, Paul

    2017-03-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm×3 cm) to handprint (e.g., 10 cm×10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain manageable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing caused by downsampling have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem of regridding dense TLS data. The TLS data collected from the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with two different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
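    The attenuation such a filter provides at alias-prone wavelengths can be checked against the analytic Butterworth magnitude response. A sketch assuming the ideal fourth-order response (not GMT's exact implementation), with the recommended cutoff of three times the grid size:

```python
def butterworth_gain(wavelength, cutoff_wavelength, order=4):
    """Amplitude response of an order-n Butterworth low-pass filter,
    expressed in spatial wavelength (gain = 1/sqrt(2) at the cutoff).
    """
    # The spatial-frequency ratio f/fc equals cutoff_wavelength/wavelength.
    ratio = cutoff_wavelength / wavelength
    return 1.0 / (1.0 + ratio ** (2 * order)) ** 0.5

grid = 1.0                       # target DEM grid size (arbitrary units)
cutoff = 3.0 * grid              # recommended cutoff: 3x the grid size
nyquist_wavelength = 2.0 * grid  # shortest wavelength the new grid supports

print(round(butterworth_gain(cutoff, cutoff), 3))              # 0.707 at cutoff
print(round(butterworth_gain(nyquist_wavelength, cutoff), 3))  # ~0.19: strong
                                                               # suppression of
                                                               # alias-prone scales
```

    Wavelengths near the output Nyquist limit are thus reduced to under a fifth of their amplitude before downsampling, which is what keeps them from folding into the DEM.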

  9. Characterization and Reduction of Cardiac- and Respiratory-Induced Noise as a Function of the Sampling Rate (TR) in fMRI

    PubMed Central

    Cordes, Dietmar; Nandy, Rajesh R.; Schafer, Scott; Wager, Tor D.

    2014-01-01

    It has recently been shown that both high-frequency and low-frequency cardiac and respiratory noise sources exist throughout the entire brain and can cause significant signal changes in fMRI data. It is also known that the brainstem, basal forebrain, and spinal cord area are problematic for fMRI because of the magnitude of cardiac-induced pulsations at these locations. In this study, the physiological noise contributions in the lower brain areas (covering the brainstem and adjacent regions) are investigated and a novel method is presented for computing both low-frequency and high-frequency physiological regressors accurately for each subject. In particular, using a novel optimization algorithm that penalizes curvature (i.e., the second derivative) of the physiological hemodynamic response functions, the cardiac- and respiratory-related response functions are computed. The physiological noise variance is determined for each voxel, and the frequency-aliasing property of the high-frequency cardiac waveform as a function of the repetition time (TR) is investigated. It is shown that for the brainstem and other brain areas associated with large pulsations of the cardiac rate, the temporal SNR associated with the low-frequency range of the BOLD response has maxima at subject-specific TRs. At these values, the high-frequency aliased cardiac rate can be eliminated by digital filtering without affecting the BOLD-related signal. PMID:24355483

  10. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    PubMed

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This obviates the need to measure the system's response or to recalibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
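    The harmonic-aliasing mechanism behind this nonlinearity can be sketched in a few lines: with N equally spaced samples per modulation period, harmonic k folds onto bin k mod N, so for a common four-sample scheme the third harmonic is indistinguishable from a phase-reversed fundamental and biases the recovered phase (i.e., range). This is an illustrative sketch; N = 4 is an assumption, not a value from the paper:

```python
import math

N = 4  # samples per modulation period in a typical AMCW sensor

def samples(harmonic, phase):
    """N equally spaced samples of cos(2*pi*harmonic*t + phase)."""
    return [math.cos(2 * math.pi * harmonic * n / N + phase) for n in range(N)]

phase = 0.3
fundamental = samples(1, -phase)  # fundamental with NEGATED phase
third = samples(3, phase)         # 3rd harmonic with phase +0.3

# The sampled 3rd harmonic is identical to a phase-reversed fundamental,
# so any odd-harmonic content corrupts the phase estimate:
print(all(abs(a - b) < 1e-9 for a, b in zip(fundamental, third)))  # True
```

    Attenuating the offending harmonics during sampling, as the abstract describes, removes this folded contribution rather than calibrating it away afterwards.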

  11. A 24-ch Phased-Array System for Hyperpolarized Helium Gas Parallel MRI to Evaluate Lung Functions.

    PubMed

    Lee, Ray; Johnson, Glyn; Stefanescu, Cornel; Trampel, Robert; McGuinness, Georgeann; Stoeckel, Bernd

    2005-01-01

    Hyperpolarized 3He gas MRI has significant potential for assessing pulmonary function. Because the non-equilibrium polarization of the gas is steadily depleted over the course of the excitations, the signal-to-noise ratio (SNR) can be independent of the number of data acquisitions under certain circumstances. This provides a unique opportunity for parallel MRI to gain both temporal and spatial resolution without reducing SNR. We have built a 24-channel receive / 2-channel transmit phased-array system for 3He parallel imaging. Our in vivo experimental results show that significant temporal and spatial resolution gains can be achieved at no cost to SNR. With 3D data acquisition, an eightfold (2x4) scan time reduction can be achieved without any aliasing in the images. Additionally, a rigorous analysis of the use of low-impedance preamplifiers for decoupling presented evidence of strong coupling.

  12. Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet

    NASA Astrophysics Data System (ADS)

    Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark

    2017-11-01

    Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at Mj,1 = 1.6 and a bypass stream at Mj,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.

  13. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  14. Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.

    PubMed

    Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J

    2017-02-01

    Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc.

  15. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project, results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.

  16. Fourier Theory Explanation for the Sampling Theorem Demonstrated by a Laboratory Experiment.

    ERIC Educational Resources Information Center

    Sharma, A.; And Others

    1996-01-01

    Describes a simple experiment that uses a CCD video camera, a display monitor, and a laser-printed bar pattern to illustrate signal sampling problems that produce aliasing or moiré fringes in images. Uses the Fourier transform to provide an appropriate and elegant means to explain the sampling theorem and the aliasing phenomenon in CCD-based…
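    The aliasing phenomenon the experiment demonstrates optically can also be shown numerically: a sinusoid above the Nyquist frequency produces exactly the same samples as its folded, lower-frequency alias. A sketch with assumed example frequencies:

```python
import math

fs = 10.0              # sampling rate (Hz); Nyquist frequency = 5 Hz
f_true = 9.0           # signal frequency, above Nyquist
f_alias = fs - f_true  # folds down to 1 Hz

n = range(20)
above_nyquist = [math.sin(2 * math.pi * f_true * k / fs) for k in n]
aliased = [-math.sin(2 * math.pi * f_alias * k / fs) for k in n]

# Sampled at 10 Hz, a 9 Hz sine is indistinguishable from a
# (sign-flipped) 1 Hz sine: the two sample sequences coincide.
print(all(abs(a - b) < 1e-9 for a, b in zip(above_nyquist, aliased)))  # True
```

    The bar pattern in the classroom experiment plays the role of the 9 Hz sine, and the CCD pixel pitch sets the spatial sampling rate; the moiré fringes are the folded low-frequency alias.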

  17. On the choice of orbits for an altimetric satellite to study ocean circulation and tides

    NASA Technical Reports Server (NTRS)

    Parke, Michael E.; Stewart, Robert H.; Farless, David L.; Cartwright, David E.

    1987-01-01

    The choice of an orbit for satellite altimetric studies of the ocean's circulation and tides requires an understanding of the orbital characteristics that influence the accuracy of the satellite's measurements of sea level and the temporal and spatial distribution of the measurements. The orbital characteristics that influence accurate calculations of the satellite's position as a function of time are examined, and the pattern of ground tracks laid down on the ocean's surface as a function of the satellite's altitude and inclination is studied. The results are used to examine the aliases in the measurements of surface geostrophic currents and tides. Finally, these considerations are used to specify possible orbits that may be useful for the upcoming Topex/Poseidon mission.
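    The tidal aliases such orbit choices must control can be computed by folding a tidal frequency at the sampling rate set by the orbit's exact-repeat period. A sketch using the well-known 9.9156-day repeat eventually adopted for TOPEX/Poseidon (an assumed example, not a value from this paper):

```python
def alias_period(signal_period_hours, repeat_days):
    """Alias period (days) of a tidal line sampled once per orbit repeat."""
    dt = repeat_days * 24.0            # sampling interval (hours)
    cycles = dt / signal_period_hours  # signal cycles elapsed per sample
    frac = cycles % 1.0
    frac = min(frac, 1.0 - frac)       # fold to [0, 0.5] of the sample rate
    return (dt / frac) / 24.0 if frac else float("inf")

# The M2 tide (12.4206 h) sampled at a 9.9156-day repeat aliases to a
# period of roughly 62 days, well separated from seasonal signals:
print(round(alias_period(12.4206012, 9.9156), 1))
```

    Keeping such alias periods short, and distinct from each other and from seasonal cycles, is one of the central orbit-design considerations the paper examines.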

  18. 78 FR 69927 - In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... DEPARTMENT OF STATE [Public Notice 8527] In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Record...

  19. 75 FR 28849 - Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ... DEPARTMENT OF STATE [Public Notice 7026] Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Records assembled in these...

  20. Anti-aliasing filter design on spaceborne digital receiver

    NASA Astrophysics Data System (ADS)

    Yu, Danru; Zhao, Chonghui

    2009-12-01

    In recent years, with the development of satellite observation technologies, more and more active remote sensing technologies have been adopted in spaceborne systems. A spaceborne precipitation radar depends heavily on high-performance digital processing to collect meaningful rain echo data, which increases the complexity of the spaceborne system and calls for a high-performance, reliable digital receiver. This paper analyzes the frequency aliasing in intermediate frequency signal sampling for digital down conversion (DDC) in spaceborne radar and presents an effective digital filter design. Through analysis and calculation, we choose reasonable parameters for the half-band filters to suppress frequency aliasing in the DDC. Compared with a traditional filter, the FPGA resource cost in our system is reduced by over 50%. This effectively reduces the complexity of the spaceborne digital receiver and improves system reliability.
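    The large resource saving follows from the defining property of a half-band filter: every second coefficient, apart from the center tap, is exactly zero, so roughly half the multipliers can be omitted in hardware. A sketch of an illustrative windowed-sinc half-band design (the paper's actual coefficients are not given here):

```python
import math

def halfband_taps(half_order=8):
    """Windowed-sinc half-band low-pass FIR with cutoff at fs/4.

    Every even-offset tap except the center is exactly zero, which is why
    an FPGA implementation needs about half the multipliers of a general
    FIR of the same length.
    """
    m = 2 * half_order  # center index
    taps = []
    for n in range(-m, m + 1):
        if n == 0:
            h = 0.5
        else:
            h = math.sin(math.pi * n / 2) / (math.pi * n)  # ideal fs/4 sinc
        w = 0.54 + 0.46 * math.cos(math.pi * n / m)        # Hamming window
        taps.append(h * w)
    return taps

taps = halfband_taps()
center = len(taps) // 2
even_offset = [t for i, t in enumerate(taps)
               if i != center and (i - center) % 2 == 0]
print(all(abs(t) < 1e-12 for t in even_offset))  # True: those taps vanish
```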

  1. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss the testing of image processing algorithms for mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when the sensors were subjected to a Quantum Cascade Laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.

  2. A Deep Analysis of Center Displacement in An Idealized Tropical Cyclone with Low-wavenumber Asymmetries

    NASA Astrophysics Data System (ADS)

    Zhao, C.; Song, J.; Leng, H.

    2017-12-01

    The Tropical Cyclone (TC) center-finding technique plays an important role when diagnostic analyses of TC structure are performed, especially when dealing with low-wavenumber asymmetries. Previous work has established that the diagnosed structure of TCs can vary greatly depending on the displacement introduced by center-finding techniques. As it is difficult to define a true TC center in the real world, this work explores how low-wavenumber azimuthal Fourier analyses vary with center displacement using idealized, parametric TC-like vortices with different perturbation structures. It is shown that the errors are sensitive to the location and radial structure of the added perturbation. When azimuthal wavenumber 1 and 3 asymmetries are added, increasing the radial shear of the initial asymmetries significantly enhances the corresponding spectral energy around the radius of maximum wind (RMW), and also has a large effect on the spectral energy of wavenumber 2. On the contrary, the wavenumber 2 cases show a reduction from 1 RMW to the outer radius as the shear increases, with little effect on the spectral energy of wavenumber 1 or 3. Previous findings indicated that the aliasing depends on the placement of the center relative to the location of the asymmetries, which remains valid in these shearing situations. Moreover, the aliasing caused by phase displacement is found to be less sensitive to the radial shear in the wavenumber 2 and 3 cases, while it shows a significant amplification and deformation when a wavenumber 1 asymmetry is added.

  3. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into the function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. The objective of this study was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive," and the topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, the results carry implications for the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  4. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: What is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimations: the shape of the timing profile is distorted, and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool, checking the constant timing-resolution behavior of a given timing pick-off method regardless of source location changes. Lastly, a performance comparison of several digital timing methods is also shown.

  5. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI-based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies between k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When eddy current cross terms exist (e.g., in oblique-plane EPI), 2D phase correction is needed to effectively reduce Nyquist artifact. Second, minuscule-motion-induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule-motion-induced aliasing artifact are typically removed sequentially, in two stages. Although two-stage phase correction generally performs well for non-oblique-plane EPI data obtained from a well-calibrated system, residual artifacts may still be pronounced in oblique-plane EPI data or when eddy current cross terms exist. To address this challenge, here we report a new composite 2D phase correction procedure, which effectively removes Nyquist artifact and minuscule-motion-induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI-based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI and should prove highly valuable for clinical use and research studies of DWI. PMID:27114342

  6. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function that exhibits spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to the standard MUSIC.

  7. Infrared Sensor Readout Design

    DTIC Science & Technology

    1975-11-01

    Line Replaceable Unit LT Level Translator MRT Minimum Resolvable Temperature MTF Modulation Transfer Function PC Printed Circuit SCCCD Surface...reduced, not only will the aliased noise increase, but signal aliasing will also start to occur. At the display level this means that sharp edges could...converted from a quantity of charge to a voltage-level shift by the action of the precharge pulse that presets the potential on the output diode node to

  8. Staggered Multiple-PRF Ultrafast Color Doppler.

    PubMed

    Posada, Daniel; Poree, Jonathan; Pellissier, Arnaud; Chayer, Boris; Tournoux, Francois; Cloutier, Guy; Garcia, Damien

    2016-06-01

    Color Doppler imaging is an established pulsed ultrasound technique to visualize blood flow non-invasively. High-frame-rate (ultrafast) color Doppler, by emission of plane or circular wavefronts, allows a severalfold increase in frame rates. Conventional and ultrafast color Doppler are both limited by the range-velocity dilemma, which may result in velocity folding (aliasing) for large depths and/or large velocities. We investigated multiple pulse-repetition-frequency (PRF) emissions arranged in a series of staggered intervals to remove aliasing in ultrafast color Doppler. Staggered PRF is an emission process where the time delays between successive pulse transmissions change in an alternating way. We tested staggered dual- and triple-PRF ultrafast color Doppler, 1) in vitro in a spinning disc and a free jet flow, and 2) in vivo in a human left ventricle. The in vitro results showed that the Nyquist velocity could be extended to up to 6 times the conventional limit. We found coefficients of determination r² ≥ 0.98 between the de-aliased and ground-truth velocities. Consistent de-aliased Doppler images were also obtained in the human left heart. Our results demonstrate that staggered multiple-PRF ultrafast color Doppler is efficient for high-velocity, high-frame-rate blood flow imaging. This is particularly relevant for new developments in ultrasound imaging relying on accurate velocity measurements.
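    The idea behind multiple-PRF de-aliasing can be sketched with a toy dual-PRF example (an illustrative reconstruction by congruence search, not the authors' staggered-PRF estimator; the Nyquist velocities below are arbitrary). Each PRF wraps the true velocity into its own Nyquist interval; searching for the unwrapping that makes the two measurements agree recovers velocities well beyond either single-PRF limit.

```python
import numpy as np

def wrap(v, v_nyq):
    # Aliased (wrapped) Doppler velocity for a Nyquist velocity v_nyq.
    return (v + v_nyq) % (2.0 * v_nyq) - v_nyq

def dealias_dual_prf(v1, v2, vn1, vn2, kmax=4):
    # Search the unwrapping integers (k1, k2) whose candidates agree best,
    # restricted to the extended Nyquist interval of the PRF pair.
    v_ext = vn1 * vn2 / (vn2 - vn1)   # extended Nyquist velocity (vn1 < vn2)
    best, best_err = None, np.inf
    for k1 in range(-kmax, kmax + 1):
        c1 = v1 + 2.0 * k1 * vn1
        if abs(c1) > v_ext:
            continue
        for k2 in range(-kmax, kmax + 1):
            c2 = v2 + 2.0 * k2 * vn2
            err = abs(c1 - c2)
            if err < best_err:
                best, best_err = 0.5 * (c1 + c2), err
    return best

v_true = 2.2                    # beyond both single-PRF Nyquist limits
v1 = wrap(v_true, 1.0)          # measured with PRF 1 (Nyquist 1.0 m/s)
v2 = wrap(v_true, 1.5)          # measured with PRF 2 (Nyquist 1.5 m/s)
v_hat = dealias_dual_prf(v1, v2, 1.0, 1.5)
```

    For this 2:3 PRF ratio the extended Nyquist velocity is 1.0·1.5/(1.5−1.0) = 3.0 m/s, i.e. three times the smaller single-PRF limit, consistent with the severalfold extension reported in the abstract.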

  9. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
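    The general approach of adding a polynomial correction around each discontinuity of a naive waveform can be illustrated with a simple two-point polyBLEP sawtooth (a common, cruder member of this family of methods, not the integrated third-order B-spline corrector evaluated in the paper; the sample rate and fundamental below are arbitrary). Accuracy is checked against an additive, truly bandlimited sawtooth.

```python
import numpy as np

def poly_blep(t, dt):
    # Two-point polynomial residual applied around the phase wrap.
    if t < dt:
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def saw_naive(phase):
    # Trivial (non-bandlimited) rising sawtooth in [-1, 1).
    return 2.0 * phase - 1.0

fs, f0, n = 44100.0, 2500.0, 44100
phase = (f0 * np.arange(n) / fs) % 1.0
dt = f0 / fs

naive = saw_naive(phase)
blep = np.array([saw_naive(p) - poly_blep(p, dt) for p in phase])

# Additive reference: exact harmonics up to the Nyquist limit only.
t = np.arange(n) / fs
k = np.arange(1, int(fs / 2.0 / f0) + 1)
ref = -(2.0 / np.pi) * np.sum(
    np.sin(2.0 * np.pi * np.outer(k, f0 * t)) / k[:, None], axis=0)

err_naive = np.sqrt(np.mean((naive - ref) ** 2))
err_blep = np.sqrt(np.mean((blep - ref) ** 2))
```

    The corrected waveform lies closer to the bandlimited reference than the trivial sawtooth, because the polynomial residual suppresses the spectral images that fold back into the audio band.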

  10. DNS load balancing in the CERN cloud

    NASA Astrophysics Data System (ADS)

    Reguero Naredo, Ignacio; Lobato Pardavila, Lorena

    2017-10-01

    Load balancing is one of the technologies enabling deployment of large-scale applications on cloud resources. A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications that accept DNS timing dynamics and do not require persistence. It currently serves over 450 load-balanced aliases with two small VMs acting as master and slave. The aliases are mapped to DNS subdomains, which are managed with DDNS according to a load metric collected from the alias member nodes with SNMP. In recent years, several improvements have been made to the software, for instance: support for IPv6, parallelization of the status requests, reimplementation of the client in Python to allow multiple aliases with differentiated states on the same machine, and support for application state. The configuration of the load balancer is currently managed by a Puppet type, which discovers the alias member nodes and gets the alias definitions from the Ermis REST service. The Aiermis self-service GUI for the management of the LB aliases has been produced; it is based on the Ermis service, which implements a form of Load Balancing as a Service (LBaaS). The Ermis REST API has authorisation based on Foreman hostgroups. The CERN DNS LBD is open-source software under the Apache 2 license.

  11. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division-of-focal-plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so because of the spatially modulated arrangement of their pixel-to-pixel polarizers, which often results in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELMs) that attempt to recover missing high-frequency and low-frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high- and low-frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.

  12. Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm

    DOE PAGES

    Huang, C. -K.; Zeng, Y.; Wang, Y.; ...

    2016-10-01

    The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.

  13. Atmospheric Pressure Corrections in Geodesy and Oceanography: a Strategy for Handling Air Tides

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.; Ray, Richard D.

    2003-01-01

    Global pressure data are often needed for processing or interpreting modern geodetic and oceanographic measurements. The most common source of these data is the analysis or reanalysis products of various meteorological centers. Tidal signals in these products can be problematic for several reasons, including potentially aliased sampling of the semidiurnal solar tide as well as the presence of various modeling or timing errors. Building on the work of Van den Dool and colleagues, we lay out a strategy for handling atmospheric tides in (re)analysis data. The procedure also offers a method to account for ocean loading corrections in satellite altimeter data that are consistent with standard ocean-tide corrections. The proposed strategy has immediate application to the on-going Jason-1 and GRACE satellite missions.

  14. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  15. Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C. -K.; Zeng, Y.; Wang, Y.

    The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.

  16. Post-Fisherian Experimentation: From Physical to Virtual

    DOE PAGES

    Jeff Wu, C. F.

    2014-04-24

    Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: the principles of effect hierarchy, sparsity, and heredity for factorial designs; a new method called CME for de-aliasing aliased effects; and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.

  17. Determining Aliasing in Isolated Signal Conditioning Modules

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The basic concept of aliasing is this: Converting analog data into digital data requires sampling the signal at a specific rate, known as the sampling frequency. The result of this conversion process is a new function, a sequence of digital samples. This new function has a frequency spectrum, which contains all the frequency components of the original signal. The Fourier transform mathematics of this process show that the frequency spectrum of the sequence of digital samples consists of the original signal's frequency spectrum plus copies of that spectrum shifted by all the harmonics of the sampling frequency. If the original analog signal is sampled in the conversion process at a minimum of twice the highest frequency component contained in the analog signal, and if the reconstruction process is limited to the highest frequency of the original signal, then the reconstructed signal accurately duplicates the original analog signal. If the sampling rate falls below this limit, the shifted spectral copies overlap the original spectrum, and it is this overlap that gives birth to aliasing.
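    The folding described above can be demonstrated in a few lines (an illustrative sketch; the 7 Hz tone and 10 Hz sampling rate are arbitrary): a 7 Hz sine sampled at 10 Hz yields exactly the same samples as a −3 Hz sine, so the 7 Hz component masquerades as 3 Hz.

```python
import numpy as np

def apparent_frequency(f, fs):
    # Frequency to which a tone at f folds when sampled at rate fs.
    return abs((f + fs / 2.0) % fs - fs / 2.0)

fs = 10.0                                    # sampling frequency, Hz
n = np.arange(64)                            # sample indices
s_high = np.sin(2 * np.pi * 7.0 * n / fs)    # 7 Hz tone, above fs/2
s_alias = np.sin(2 * np.pi * -3.0 * n / fs)  # its alias at 7 - 10 = -3 Hz
```

    The two sample sequences are numerically indistinguishable, which is why reconstruction cannot tell the 7 Hz tone from a 3 Hz one once the Nyquist condition is violated.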

  18. Simulation of sampling effects in FPAs

    NASA Astrophysics Data System (ADS)

    Cook, Thomas H.; Hall, Charles S.; Smith, Frederick G.; Rogne, Timothy J.

    1991-09-01

    The use of multiplexers and large focal plane arrays in advanced thermal imaging systems has drawn renewed attention to sampling and aliasing issues in imaging applications. As evidenced by discussions in a recent workshop, there is no clear consensus among experts whether aliasing in sensor designs can be readily tolerated, or must be avoided at all cost. Further, there is no straightforward, analytical method that can answer the question, particularly when considering image interpreters as different as humans and autonomous target recognizers (ATRs). However, the means exist for investigating sampling and aliasing issues through computer simulation. The U.S. Army Tank-Automotive Command (TACOM) Thermal Image Model (TTIM) provides realistic sensor imagery that can be evaluated by both human observers and ATRs. This paper briefly describes the history and current status of TTIM, explains the simulation of FPA sampling effects, presents validation results of the FPA sensor model, and demonstrates the utility of TTIM for investigating sampling effects in imagery.

  19. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
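    The core trick of estimating the divergence term of SURE with a random probe, using only black-box evaluations of the reconstruction operator, can be sketched as follows (a generic illustration with a soft-thresholding "reconstruction" standing in for an MRI pipeline; for soft thresholding the divergence is known analytically, which lets the Monte-Carlo estimate be checked).

```python
import numpy as np

def soft_threshold(y, lam):
    # Stand-in "reconstruction algorithm", treated as a black box below.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def mc_divergence(f, y, eps=1e-4, rng=None):
    # Monte-Carlo divergence estimate: b^T (f(y + eps*b) - f(y)) / eps,
    # requiring only two evaluations of the black-box operator f.
    rng = np.random.default_rng(0) if rng is None else rng
    b = rng.standard_normal(y.shape)
    return float(b @ (f(y + eps * b) - f(y)) / eps)

rng = np.random.default_rng(1)
y = rng.standard_normal(10000)   # synthetic "measured" data
lam = 0.5
f = lambda v: soft_threshold(v, lam)

div_mc = mc_divergence(f, y)
# Analytic divergence of soft thresholding: number of surviving coefficients.
div_exact = int(np.sum(np.abs(y) > lam))
```

    The single-probe estimate lands within a few percent of the analytic value at this problem size, which is what makes the approach practical for reconstruction algorithms whose internal workings are unavailable.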

  20. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstructing the distorted wavefront under test of a laser beam over a square area from the phase-difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error-propagation coefficients is deduced for the case where the phase-difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms and the noise-propagation coefficients, and between shear ratio, sampling points and the norm of the T matrix, are analyzed. These results provide theoretical reference and guidance for the optimized design of radial shearing interferometry systems.

  1. A new unified approach to determine geocentre motion using space geodetic and GRACE gravity data

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Kusche, Jürgen; Landerer, Felix W.

    2017-06-01

    Geocentre motion between the centre-of-mass of the Earth system and the centre-of-figure of the solid Earth surface is a critical signature of the degree-1 components of the global surface mass transport process, which includes sea level rise, ice mass imbalance and continental-scale hydrological change. To complement GRACE data for complete-spectrum mass transport monitoring, geocentre motion needs to be measured accurately. However, current methods, based on the geodetic translational approach or on global inversions of various combinations of geodetic deformation, simulated ocean bottom pressure and GRACE data, contain substantial biases and systematic errors. Here, we demonstrate a new and more reliable unified approach to geocentre motion determination using a recently formed satellite-laser-ranging-based geocentric displacement time-series of an expanded geodetic network of all four space geodetic techniques, together with GRACE gravity data. The unified approach exploits both the translational and deformational signatures of the displacement data, while the addition of GRACE's near-global coverage significantly reduces the biases found in the translational approach and the spectral aliasing errors in the inversion.

  2. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criteria technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9 and that the wind direction errors are unacceptably large, compared to those obtained for the SASS for similar assumptions.

  3. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
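    The kernel-transform metric mentioned above can be evaluated in closed form for two of the kernel families (a minimal sketch with frequencies in cycles per sample; the evaluation point is an arbitrary location inside the first image band). The linear-interpolation (triangle) kernel has transform sinc²(f), while the cubic B-spline kernel has sinc⁴(f), so the B-spline family suppresses the spectral images responsible for aliasing far more strongly, at the cost of the prefiltering step noted in the abstract.

```python
import numpy as np

def linear_kernel_spectrum(f):
    # Triangle kernel (linear interpolation): Fourier transform sinc^2(f).
    return np.sinc(f) ** 2   # np.sinc(x) = sin(pi x) / (pi x)

def cubic_bspline_kernel_spectrum(f):
    # Cubic B-spline kernel (before its required prefilter): sinc^4(f).
    return np.sinc(f) ** 4

f_image = 1.5   # a frequency inside the first spectral image band
h_lin = linear_kernel_spectrum(f_image)
h_bsp = cubic_bspline_kernel_spectrum(f_image)
```

    Both transforms equal 1 at f = 0 (no distortion of the mean), but in the image band the cubic B-spline response is more than an order of magnitude below the linear one, which is the quantitative content of the recommendation above.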

  4. GRACE AOD1B Product Release 06: Long-Term Consistency and the Treatment of Atmospheric Tides

    NASA Astrophysics Data System (ADS)

    Dobslaw, Henryk; Bergmann-Wolf, Inga; Dill, Robert; Poropat, Lea; Flechtner, Frank

    2017-04-01

    The GRACE satellites orbiting the Earth at very low altitudes are affected by rapid changes in the Earth's gravity field caused by mass redistribution in the atmosphere and oceans. To avoid temporal aliasing of such high-frequency variability into the final monthly-mean gravity fields, these effects are typically modelled during the numerical orbit integration by applying the 6-hourly GRACE Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) a priori model. In preparation for the next GRACE gravity field re-processing currently performed by the GRACE Science Data System, a new version of AOD1B has been calculated. The data-set is based on 3-hourly surface pressure anomalies from ECMWF that have been mapped to a common reference orography by means of ECMWF's mean sea-level pressure diagnostic. Atmospheric tides, the corresponding oceanic response at the S1, S2, S3, and L2 frequencies, and their annual modulations have been fitted and removed in order to retain the non-tidal variability only. The data-set is expanded into spherical harmonics complete up to degree and order 180. In this contribution, we will demonstrate that AOD1B RL06 is now free from spurious jumps in the time-series related to occasional changes in ECMWF's operational numerical weather prediction system. We will also highlight the rationale for separating tidal signals from the AOD1B coefficients, and will finally discuss the current quality of the AOD1B forecasts that have been introduced very recently for GRACE quicklook and near-realtime applications.

  5. Recent Hydrologic Developments in the SWOT Mission

    NASA Astrophysics Data System (ADS)

    Alsdorf, D. E.; Mognard, N. M.; Cretaux, J.; Calmant, S.; Lettenmaier, D. P.; Rodriguez, E.

    2012-12-01

    The Surface Water and Ocean Topography satellite mission (SWOT) is designed to measure the elevations of the world's water surfaces, including both terrestrial surface waters and the oceans. CNES, NASA, and the CSA are partners in the mission, as are hydrologists, oceanographers, and an international engineering team. Recent hydrologic and mission-related advances include the following. (1) An airborne version of SWOT, called AirSWOT, has been developed to provide calibration and validation for the mission when on orbit, as well as to support science and technology during mission development. AirSWOT flights are in the planning stage. (2) In early 2012, NASA and CNES issued calls for proposals to participate in the forthcoming SWOT Science Definition Team (SDT). Results are expected in time for a Fall 2012 start of the SDT. (3) A workshop held in June 2012 addressed the problem of estimating river discharge from SWOT measurements. SWOT discharge estimates will be developed for river reaches rather than individual cross-sections. Errors will result from algorithm unknowns of bathymetry and roughness, from errors in SWOT measurements of water surface height and inundation, from the incomplete temporal record dictated by the SWOT orbit, and from fluvial features such as unmeasured inflows and outflows within the reach used to estimate discharge. To overcome these issues, in-situ and airborne field data are required to validate and refine algorithms. (4) Two modeling methods use the Amazon Basin as a test case for demonstrating the utility of SWOT observables for constraining water balances. In one case, parameters used to minimize differences between SWOT and model water surface elevations should be adjusted locally in space and time. In the other case, using actual altimetry data as a proxy for SWOT's water surface elevations, it was determined that model water surface elevations differed from the altimetry measurements by less than 1.6 m: a considerable match given the lack of channel bathymetric knowledge. (5) The influence of the world's managed reservoirs on the water cycle is difficult to assess given the abundance of dams and the relative lack of water level or storage change information. The downstream impacts, particularly for transboundary rivers, are similarly difficult to determine. The challenges for SWOT in overcoming this gap hinge on the temporal sampling dictated by the mission's orbital repeat cycle, on the accuracy of the height measurements, on the surface area, and on topography causing radar layover. (6) While SWOT's orbit is designed to minimize errors from tidal aliasing, orbital sub-cycles can be adjusted to minimize hydrological errors. The impact of these sub-cycles has been assessed using hydrodynamic modeling of the last 1000 km reach of the Ob River, a West Siberian river draining a total area of around 3 million km2. Using a local ensemble Kalman smoother to assimilate virtual SWOT observations, similar results were obtained for either a 1-day or 3-day sub-cycle in decreasing the differences between "true" and modeled water elevations. A key result is the necessity of using the smoother in the assimilation, at least for large rivers like the Ob.

  6. Aspects of spatial and temporal aggregation in estimating regional carbon dioxide fluxes from temperate forest soils

    NASA Technical Reports Server (NTRS)

    Kicklighter, David W.; Melillo, Jerry M.; Peterjohn, William T.; Rastetter, Edward B.; Mcguire, A. David; Steudler, Paul A.; Aber, John D.

    1994-01-01

    We examine the influence of aggregation errors on developing estimates of regional soil-CO2 flux from temperate forests. We find daily soil-CO2 fluxes to be more sensitive to changes in soil temperatures (Q10 = 3.08) than air temperatures (Q10 = 1.99). The direct use of mean monthly air temperatures with a daily flux model underestimates regional fluxes by approximately 4%. Temporal aggregation error varies with spatial resolution. Overall, our calibrated modeling approach reduces spatial aggregation error by 9.3% and temporal aggregation error by 15.5%. After minimizing spatial and temporal aggregation errors, mature temperate forest soils are estimated to contribute 12.9 Pg C/yr to the atmosphere as carbon dioxide. Georeferenced model estimates agree well with annual soil-CO2 fluxes measured during chamber studies in mature temperate forest stands around the globe.
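
    The temporal aggregation error discussed above follows from the convexity of the Q10 temperature response: a flux computed from a mean temperature understates the mean of the daily fluxes. A minimal sketch, using the soil-temperature Q10 from the abstract but an entirely hypothetical month of temperatures and arbitrary flux units (not the authors' calibrated model):

```python
import math

def soil_co2_flux(temp_c, f_ref=1.0, t_ref=10.0, q10=3.08):
    """Q10 response: flux grows by a factor q10 per 10 deg C of warming.
    The q10 value is the soil-temperature sensitivity from the abstract."""
    return f_ref * q10 ** ((temp_c - t_ref) / 10.0)

# Hypothetical month of daily soil temperatures (deg C)
daily_temps = [8 + 6 * math.sin(2 * math.pi * d / 30) for d in range(30)]

mean_of_daily_fluxes = sum(soil_co2_flux(t) for t in daily_temps) / 30
flux_of_mean_temp = soil_co2_flux(sum(daily_temps) / 30)

# The Q10 response is convex, so by Jensen's inequality the flux evaluated
# at the mean temperature underestimates the mean of the daily fluxes --
# the temporal aggregation error the authors quantify.
print(mean_of_daily_fluxes > flux_of_mean_temp)  # True
```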

  7. Graphics processing unit (GPU) real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  8. Event Compression Using Recursive Least Squares Signal Processing.

    DTIC Science & Technology

    1980-07-01

    decimation of the Burstl signal with and without all-pole prefiltering to reduce aliasing. Figures 3.32a-c and 3.33a-c show the same examples but with 4/1... to reduce aliasing, we found that it did not improve the quality of the event-compressed signals. If filtering must be performed, all-pole filtering... AD-A089 785, Massachusetts Institute of Technology, Cambridge, Research Lab of Electronics: Event Compression Using Recursive Least Squares Signal Processing.

  9. Entropy of space-time outcome in a movement speed-accuracy task.

    PubMed

    Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M

    2015-12-01

    The experiment reported was set-up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the movement spatial and temporal errors considered separately increased and decreased on the respective dimension as a function of an increment of movement velocity. However, the joint space-time entropy was lowest when the relative contribution of spatial and temporal task criteria was comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped to this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
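
    The joint space-time entropy at the heart of this analysis can be sketched with discrete outcome bins. The outcomes below are hypothetical; the study's actual binning of spatial and temporal errors is not reproduced here:

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Shannon entropy (bits) of a list of discrete outcomes."""
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in Counter(outcomes).values())

# Hypothetical movement outcomes as (spatial-error bin, temporal-error bin)
outcomes = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 0), (0, 0), (2, 1)]

h_space = shannon_entropy([s for s, _ in outcomes])  # spatial dimension alone
h_time = shannon_entropy([t for _, t in outcomes])   # temporal dimension alone
h_joint = shannon_entropy(outcomes)                  # joint space-time entropy

# The joint entropy is at most the sum of the marginal entropies; the gap
# reflects the dependence between spatial and temporal errors that the
# abstract's U-shaped entropy function describes.
print(h_joint <= h_space + h_time)  # True
```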

  10. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers utilise the subsampling method to down convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down converted using the subsampling receiver, but using the incorrect subsampling frequency could result in signals aliasing one another after down conversion. The existing method for subsampling multiband signals focused on down converting all the signals without any aliasing between the signals. The case considered initially was a dual band signal, and then it was further extended to a more general multiband case. In this thesis, a new method is proposed with the assumption that only one signal is needed to not overlap the other multiband signals that are down converted at the same time. The proposed method will introduce unique formulas using the said assumption to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method will provide lower valid subsampling frequencies for down conversion compared to the existing methods.
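
    The single-band constraint that such methods generalise can be sketched with the classical bandpass-sampling inequalities; the band edges below are illustrative, and the thesis' new formulas for protecting one target signal among several are not reproduced:

```python
def valid_subsampling_rates(f_low, f_high):
    """Classical bandpass-sampling ranges of fs that let a single signal
    occupying [f_low, f_high] (Hz) be downconverted without self-aliasing:
    2*f_high/n <= fs <= 2*f_low/(n-1), integer n up to f_high/bandwidth."""
    bandwidth = f_high - f_low
    ranges = []
    for n in range(1, int(f_high / bandwidth) + 1):
        lo = 2 * f_high / n
        hi = float("inf") if n == 1 else 2 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# Example: a single band at 20-25 MHz; each n yields an allowed fs range,
# and larger n permits lower (subsampled) rates.
print(valid_subsampling_rates(20e6, 25e6)[1])  # (25000000.0, 40000000.0)
```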

  11. Some aspects of simultaneously flying Topex Follow-On in a Topex orbit with Geosat Follow-On in a Geosat orbit

    NASA Technical Reports Server (NTRS)

    Parke, Michael E.; Born, George; Mclaughlin, Craig

    1994-01-01

    The advantages of having Geosat Follow-On in a Geosat orbit flying simultaneously with Topex Follow-On in a Topex/Poseidon orbit are examined. The orbits are evaluated using two criteria. The first is the acute crossover angle. This angle should be at least 40 degrees in order to accurately resolve the slope of sea level at crossover locations. The second is tidal aliasing. In order to solve for tides, the largest constituents should not be aliased to a frequency lower than two cycles/year and should be at least one cycle discrete from one another and from exactly two cycles/year over the mission life. The results show that TFO and GFO in these orbits complement each other. Both satellites have large crossover angles over a wide latitude range. In addition, the Topex orbit has good aliasing characteristics for the M2 and P1 tides for which the Geosat orbit has difficulty.
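
    The tidal-aliasing criterion can be checked by folding a constituent's frequency against the once-per-repeat sampling. A sketch using the standard M2 period and the 9.9156-day TOPEX/POSEIDON repeat (standard values, not figures quoted in this abstract):

```python
def alias_period_days(constituent_period_hours, repeat_period_days):
    """Alias period of a tidal constituent observed once per orbit repeat."""
    f = 24.0 / constituent_period_hours  # constituent frequency, cycles/day
    fs = 1.0 / repeat_period_days        # sampling rate, samples/day
    f_alias = f % fs
    if f_alias > fs / 2:                 # fold into the band [0, fs/2]
        f_alias = fs - f_alias
    return 1.0 / f_alias

# M2 (12.4206-hour period) sampled by the 9.9156-day TOPEX/POSEIDON repeat
print(round(alias_period_days(12.4206012, 9.9156), 1))  # 62.1
```

An alias period of about 62 days corresponds to roughly 5.9 cycles/year, comfortably above the two-cycles/year floor in the second criterion.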

  12. Harmonic analysis of electrified railway based on improved HHT

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

    In this paper, the causes and harms of harmonics in the electric locomotive electrical system are first studied and analyzed. Based on the characteristics of the harmonics in the electrical system, the Hilbert-Huang transform (HHT) method is introduced. Based on an in-depth analysis of the empirical mode decomposition method and the Hilbert transform, the causes of, and solutions to, the endpoint effect and the modal aliasing problem in the HHT method are explored. For the endpoint effect of the HHT, this paper uses a point-symmetric extension method to extend the collected data; to address the modal aliasing problem, it uses a high-frequency harmonic assistant method to preprocess the signal and gives an empirical formula for the high-frequency auxiliary harmonic. Finally, combining the suppression of the HHT endpoint effect and the modal aliasing problem, an improved HHT method is proposed and simulated in MATLAB. The simulation results show that the improved HHT is effective for the electric locomotive power supply system.

  13. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
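
    A minimal sketch of the one-dimensional basis from which such a Galerkin method builds its tensor products, via the Cox-de Boor recursion; the degree and knot vector are illustrative:

```python
def bspline_basis(i, p, knots, t):
    """Value at t of the i-th B-spline basis function of degree p
    on the given knot vector (Cox-de Boor recursion)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, t)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, t)
    return left + right

# Quadratic B-splines on a uniform knot vector sum to one on the interior
# interval [knots[p], knots[-p-1]] (partition of unity).
knots = [0, 1, 2, 3, 4, 5, 6, 7]
total = sum(bspline_basis(i, 2, knots, 3.5) for i in range(5))
print(abs(total - 1.0) < 1e-12)  # True
```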

  14. Demonstrating the Value of Fine-resolution Optical Data for Minimising Aliasing Impacts on Biogeochemical Models of Surface Waters

    NASA Astrophysics Data System (ADS)

    Chappell, N. A.; Jones, T.; Young, P.; Krishnaswamy, J.

    2015-12-01

    There is increasing awareness that under-sampling may have resulted in the omission of important physicochemical information present in water quality signatures of surface waters - thereby affecting interpretation of biogeochemical processes. For dissolved organic carbon (DOC) and nitrogen this under-sampling can now be avoided using UV-visible spectroscopy measured in-situ and continuously at a fine-resolution e.g. 15 minutes ("real time"). Few methods are available to extract biogeochemical process information directly from such high-frequency data. Jones, Chappell & Tych (2014 Environ Sci Technol: 13289-97) developed one such method using optically-derived DOC data based upon a sophisticated time-series modelling tool. Within this presentation we extend the methodology to quantify the minimum sampling interval required to avoid distortion of model structures and parameters that describe fundamental biogeochemical processes. This shifting of parameters which results from under-sampling is called "aliasing". We demonstrate that storm dynamics at a variety of sites dominate over diurnal and seasonal changes and that these must be characterised by sampling that may be sub-hourly to avoid aliasing. This is considerably shorter than that used by other water quality studies examining aliasing (e.g. Kirchner 2005 Phys Rev: 069902). The modelling approach presented is being developed into a generic tool to calculate the minimum sampling for water quality monitoring in systems driven primarily by hydrology. This is illustrated with fine-resolution, optical data from watersheds in temperate Europe through to the humid tropics.

  15. Simultaneous Multi-Slice fMRI using Spiral Trajectories

    PubMed Central

    Zahneisen, Benjamin; Poser, Benedikt A.; Ernst, Thomas; Stenger, V. Andrew

    2014-01-01

    Parallel imaging methods using multi-coil receiver arrays have been shown to be effective for increasing MRI acquisition speed. However, parallel imaging methods for fMRI with 2D sequences show only limited improvements in temporal resolution because of the long echo times needed for BOLD contrast. Recently, Simultaneous Multi-Slice (SMS) imaging techniques have been shown to increase fMRI temporal resolution by factors of four and higher. In SMS fMRI multiple slices can be acquired simultaneously using Echo Planar Imaging (EPI) and the overlapping slices are un-aliased using a parallel imaging reconstruction with multiple receivers. The slice separation can be further improved using the “blipped-CAIPI” EPI sequence that provides a more efficient sampling of the SMS 3D k-space. In this paper a blipped-spiral SMS sequence for ultra-fast fMRI is presented. The blipped-spiral sequence combines the sampling efficiency of spiral trajectories with the SMS encoding concept used in blipped-CAIPI EPI. We show that the blipped-spiral acquisition can achieve almost whole-brain coverage at 3 mm isotropic resolution in 168 ms. It is also demonstrated that the high temporal resolution allows for dynamic BOLD lag time measurement using visual/motor and retinotopic mapping paradigms. The local BOLD lag time within the visual cortex following the retinotopic mapping stimulation of expanding flickering rings is directly measured and easily translated into an eccentricity map of the cortex. PMID:24518259

  16. Observational filter for limb sounders applied to convective gravity waves

    NASA Astrophysics Data System (ADS)

    Trinh, Quang Thai; Preusse, Peter; Riese, Martin; Kalisch, Silvio

    Gravity waves (GWs) play a key role in the dynamics of the middle atmosphere. In the current work, the simulated spectral distribution of GW momentum flux (GWMF) in terms of horizontal and vertical wavenumber is analysed by applying an accurate observational filter, which considers the sensitivity and sampling geometry of satellite instruments. For this purpose, GWs are simulated for January 2008 by coupling GROGRAT (gravity wave regional or global ray tracer) and a ray-based spectral parameterization of convective gravity wave drag (CGWD). The atmospheric background is taken from MERRA (Modern-Era Retrospective Analysis for Research and Applications) data. GW spectra of different spatial and temporal scales from the parameterization of CGWD (MF1, MF2, MF3) at 25 km altitude are considered. The observational filter contains the following elements: determination of the wavelength along the line of sight, application of the visibility filter from Preusse et al., JGR, 2002, determination of the along-track wavelength, and aliasing correction as well as correction of GWMF due to longer horizontal wavelengths along-track. The sensitivity and sampling geometries of SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) and HIRDLS (High Resolution Dynamics Limb Sounder) are simulated. Results show that all spectra are shifted towards longer horizontal and vertical wavelengths after applying the observational filter. Spectrum MF1 is most influenced and MF3 least influenced by this filter. The part of the spectra related to short horizontal wavelengths is cut off and flipped to longer horizontal wavelengths by aliasing. The sampling geometry of HIRDLS allows a larger part of the spectrum to be seen thanks to its shorter sampling profile distance. The better vertical resolution of the HIRDLS instrument also increases its sensitivity.

  17. Observational filter for limb sounders applied to convective gravity waves

    NASA Astrophysics Data System (ADS)

    Trinh, Thai; Kalisch, Silvio; Preusse, Peter; Riese, Martin

    2014-05-01

    Gravity waves (GWs) play a key role in the dynamics of the middle atmosphere. In the current work, the simulated spectral distribution of GW momentum flux (GWMF) in terms of horizontal and vertical wavenumber is analysed by applying an accurate observational filter, which considers the sensitivity and sampling geometry of satellite instruments. For this purpose, GWs are simulated for January 2008 by coupling GROGRAT (gravity wave regional or global ray tracer) and a ray-based spectral parameterization of convective gravity wave drag (CGWD). The atmospheric background is taken from MERRA (Modern-Era Retrospective Analysis for Research and Applications) data. GW spectra of different spatial and temporal scales from the parameterization of CGWD (MF1, MF2, MF3) at 25 km altitude are considered. The observational filter contains the following elements: determination of the wavelength along the line of sight, application of the visibility filter from Preusse et al., JGR, 2002, determination of the along-track wavelength, and aliasing correction as well as correction of GWMF due to longer horizontal wavelengths along-track. The sensitivity and sampling geometries of SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) and HIRDLS (High Resolution Dynamics Limb Sounder) are simulated. Results show that all spectra are shifted towards longer horizontal and vertical wavelengths after applying the observational filter. Spectrum MF1 is most influenced and MF3 least influenced by this filter. The part of the spectra related to short horizontal wavelengths is cut off and flipped to longer horizontal wavelengths by aliasing. The sampling geometry of HIRDLS allows a larger part of the spectrum to be seen thanks to its shorter sampling profile distance. The better vertical resolution of the HIRDLS instrument also increases its sensitivity.

  18. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

    In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
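
    For reference, one common fixed-shift instance of a three-step algorithm (shifts of 0, π/2 and π; the paper's variable three-step algorithm generalises the shift) recovers the phase as:

```python
import math

def three_step_phase(i1, i2, i3):
    """Phase from three interferograms recorded with temporal phase
    shifts of 0, pi/2 and pi: i1 + i3 - 2*i2 = 2B*sin(phi) and
    i1 - i3 = 2B*cos(phi), so atan2 recovers phi."""
    return math.atan2(i1 + i3 - 2 * i2, i1 - i3)

# Synthetic hologram intensities I_k = A + B*cos(phi + k*pi/2), phi = 0.7
a, b, phi = 5.0, 2.0, 0.7
frames = [a + b * math.cos(phi + k * math.pi / 2) for k in range(3)]
print(round(three_step_phase(*frames), 3))  # 0.7
```

Misalignment of the temporal phase shift away from the nominal π/2 step is one of the error sources the authors analyse.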

  19. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm2 and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.

  20. Spatial and temporal variability of the overall error of National Atmospheric Deposition Program measurements determined by the USGS collocated-sampler program, water years 1989-2001

    USGS Publications Warehouse

    Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.

    2005-01-01

    Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.

  1. Mascons, GRACE, and Time-variable Gravity

    NASA Technical Reports Server (NTRS)

    Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.

    2006-01-01

    The GRACE mission has been in orbit now for three years and now regularly produces snapshots of the Earth's gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.

  2. Entropy of Movement Outcome in Space-Time.

    PubMed

    Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M

    2015-07-01

    Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new relation for the speed-accuracy trade-off.

  3. Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.

    PubMed

    Limongi, Roberto; Silva, Angélica M

    2016-11-01

    The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.

  4. a Climatology of Global Precipitation.

    NASA Astrophysics Data System (ADS)

    Legates, David Russell

    A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° latitude by 0.5° longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower-frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.

  5. Spectral analysis of highly aliased sea-level signals

    NASA Astrophysics Data System (ADS)

    Ray, Richard D.

    1998-10-01

    Observing high-wavenumber ocean phenomena with a satellite altimeter generally calls for "along-track" analyses of the data: measurements along a repeating satellite ground track are analyzed in a point-by-point fashion, as opposed to spatially averaging data over multiple tracks. The sea-level aliasing problems encountered in such analyses can be especially challenging. For TOPEX/POSEIDON, all signals with frequency greater than 18 cycles per year (cpy), including both tidal and subdiurnal signals, are folded into the 0-18 cpy band. Because the tidal bands are wider than 18 cpy, residual tidal cusp energy, plus any subdiurnal energy, is capable of corrupting any low-frequency signal of interest. The practical consequences of this are explored here by using real sea-level measurements from conventional tide gauges, for which the true oceanographic spectrum is known and to which a simulated "satellite-measured" spectrum, based on coarsely subsampled data, may be compared. At many locations the spectrum is sufficiently red that interannual frequencies remain unaffected. Intra-annual frequencies, however, must be interpreted with greater caution, and even interannual frequencies can be corrupted if the spectrum is flat. The results also suggest that whenever tides must be estimated directly from the altimetry, response methods of analysis are preferable to harmonic methods, even in nonlinear regimes; this will remain so for the foreseeable future. We concentrate on three example tide gauges: two coastal stations on the Malay Peninsula where the closely aliased K1 and Ssa tides are strong and at Canton Island where trapped equatorial waves are aliased.
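
    The folding of all super-18-cpy signals into the 0-18 cpy band can be sketched directly; the repeat period is TOPEX/POSEIDON's, and the M2 frequency is computed from its standard period rather than quoted from this abstract:

```python
def folded_frequency(f_cpy, repeat_days=9.9156):
    """Frequency (cycles/year) at which a signal appears when sampled
    once per satellite repeat period (TOPEX/POSEIDON default)."""
    fs = 365.25 / repeat_days             # samples per year, ~36.8
    f = f_cpy % fs
    return fs - f if f > fs / 2 else f    # fold into [0, fs/2], ~[0, 18.4]

# The semidiurnal M2 tide (~705.8 cycles/year) lands deep inside the
# low-frequency band of interest -- the familiar ~62-day alias.
m2_cpy = 365.25 * 24 / 12.4206012
print(round(folded_frequency(m2_cpy), 2))  # 5.88
```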

  6. Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.

    PubMed

    Maij, Femke; Wing, Alan M; Medendorp, W Pieter

    2013-12-01

    It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.

  7. Blending of phased array data

    NASA Astrophysics Data System (ADS)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme "beyond spatial aliasing" to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) in only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with less channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying an iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.

  8. Identifying technical aliases in SELDI mass spectra of complex mixtures of proteins

    PubMed Central

    2013-01-01

    Background: Biomarker discovery datasets created using mass spectrum protein profiling of complex mixtures of proteins contain many peaks that represent the same protein with different charge states. Correlated variables such as these can confound the statistical analyses of proteomic data. Previously we developed an algorithm that clustered mass spectrum peaks that were biologically or technically correlated. Here we demonstrate an algorithm that clusters correlated technical aliases only. Results: In this paper, we propose a preprocessing algorithm that can be used for grouping technical aliases in mass spectrometry protein profiling data. The stringency of the variance allowed for clustering is customizable, thereby affecting the number of peaks that are clustered. Subsequent analysis of the clusters, instead of individual peaks, helps reduce difficulties associated with technically-correlated data, and can aid more efficient biomarker identification. Conclusions: This software can be used to pre-process and thereby decrease the complexity of protein profiling proteomics data, thus simplifying the subsequent analysis of biomarkers by decreasing the number of tests. The software is also a practical tool for identifying which features to investigate further by purification, identification and confirmation. PMID:24010718

  9. Super-resolution for imagery from integrated microgrid polarimeters.

    PubMed

    Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M

    2011-07-04

    Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.

  10. Interpretation of aeromagnetic data over Abeokuta and its environs, Southwest Nigeria, using spectral analysis (Fourier transform technique)

    NASA Astrophysics Data System (ADS)

    Olurin, Oluwaseun T.; Ganiyu, Saheed A.; Hammed, Olaide S.; Aluko, Taiwo J.

    2016-10-01

    This study presents the results of spectral analysis of magnetic data over the Abeokuta area, Southwestern Nigeria, using the fast Fourier transform (FFT) in Microsoft Excel. The study deals with the quantitative interpretation of airborne magnetic data (Sheet No. 260) acquired by the Nigerian Geological Survey Agency in 2009. In order to minimise aliasing error, the aeromagnetic data were gridded at a spacing of 1 km. Spectral analysis was used to estimate magnetic basement depths, and the interpretation shows that the magnetic sources are distributed at two levels. The shallow sources (minimum depth) range in depth from 0.103 to 0.278 km below ground level and are inferred to be due to intrusions within the region. The deeper sources (maximum depth) range in depth from 2.739 to 3.325 km below ground and are attributed to the underlying basement.
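    The depth estimate rests on the standard spectral relation: for an ensemble of sources at depth z, the radially averaged power spectrum decays as P(k) ∝ exp(-4πkz) with k in cycles per km, so the depth follows from the slope of ln P versus k. A minimal sketch with synthetic numbers (the 3 km depth is illustrative, not from the paper):

```python
import numpy as np

def depth_from_spectrum(k, power):
    """Estimate average source depth from the slope of ln(power) vs wavenumber.

    Assumes the spectral-depth relation P(k) ~ exp(-4*pi*k*z), with k in
    cycles per km, so z = -slope / (4*pi).
    """
    slope, _ = np.polyfit(k, np.log(power), 1)
    return -slope / (4 * np.pi)

# Synthetic check: a spectrum generated for sources at 3 km depth
k = np.linspace(0.05, 0.5, 50)             # wavenumber, cycles/km
power = 2.0 * np.exp(-4 * np.pi * k * 3.0)
z = depth_from_spectrum(k, power)          # recovers ~3.0 km
```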

  11. A simulation for gravity fine structure recovery from high-low GRAVSAT SST data

    NASA Technical Reports Server (NTRS)

    Estes, R. H.; Lancaster, E. R.

    1976-01-01

    Covariance error analysis techniques were applied to investigate estimation strategies for the high-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. Surface density blocks of 5 deg x 5 deg and 2 1/2 deg x 2 1/2 deg resolution were utilized to represent the high-order geopotential, with the drag-free GRAVSAT configured in a nearly circular polar orbit at 250 km altitude. GEOPAUSE and geosynchronous satellites were considered as high relay spacecraft. It is demonstrated that knowledge of gravitational fine structure can be significantly improved at 5 deg x 5 deg resolution using SST data from a high-low configuration with reasonably accurate orbits for the low GRAVSAT. The gravity fine-structure recoverability of the high-low SST mission is compared with that of the low-low configuration and shown to be superior.

  12. Scanning wind-vector scatterometers with two pencil beams

    NASA Technical Reports Server (NTRS)

    Kirimoto, T.; Moore, R. K.

    1984-01-01

    A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power, and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation and by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with 1 watt of radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing the aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.

  13. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models and between the HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission, with a range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in a future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for future missions.

  14. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

    The development, simulation, and testing of an algorithm for anti-aliasing vector drawings is discussed. The pseudo anti-aliasing line drawing algorithm is an extension of Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments in which the display intensity shifts from one segment to the other within the overlap (transition region). In this algorithm the length of the overlap and the intensity shift are held essentially constant, so that the transition region aids the eye in integrating the segments into a single smooth line.

  15. Spectral decontamination of a real-time helicopter simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    Nonlinear mathematical models of a rotor system, referred to as rotating blade-element models, produce steady-state, high-frequency harmonics of significant magnitude. In a discrete simulation model, certain of these harmonics may be incompatible with realistic real-time computational constraints because of their aliasing into the operational low-pass region. However, the energy in an aliased harmonic may be suppressed by increasing the computation rate of an isolated, causal nonlinearity and using an appropriate filter. This decontamination technique is applied to Sikorsky's real-time model of the Black Hawk helicopter, as supplied to NASA for handling-qualities investigations.
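    The folding mechanism is simple to state: a harmonic above the Nyquist frequency reappears folded into [0, fs/2], and raising the computation rate of the nonlinearity pushes the harmonic below the new Nyquist limit. A sketch with hypothetical numbers (the 17.2 Hz harmonic and the loop rates are illustrative, not from the paper):

```python
def aliased_frequency(f, fs):
    """Frequency (Hz) at which a component at f appears when sampled at fs,
    folded into the baseband [0, fs/2]."""
    f = f % fs
    return min(f, fs - f)

# A rotor harmonic at 17.2 Hz in a 30-Hz real-time loop folds into the
# operational low-pass region:
a30 = aliased_frequency(17.2, 30.0)   # 12.8 Hz
# Computing the nonlinearity at twice the rate keeps it above the signal band:
a60 = aliased_frequency(17.2, 60.0)   # 17.2 Hz, below the new 30-Hz Nyquist
```

    Once the harmonic sits below the higher Nyquist limit, it can be removed by a filter before decimating back to the real-time frame rate, which is the essence of the decontamination technique described above.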

  16. Surface errors without semantic impairment in acquired dyslexia: a voxel-based lesion–symptom mapping study

    PubMed Central

    Pillay, Sara B.; Humphries, Colin J.; Gross, William L.; Graves, William W.; Book, Diane S.

    2016-01-01

    Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic ‘regularization’ errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as ‘played’), indicating over-reliance on sublexical grapheme–phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion–symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology. PMID:26966139

  17. Comparison of high resolution x-ray detectors with conventional FPDs using experimental MTFs and apodized aperture pixel design for reduced aliasing

    NASA Astrophysics Data System (ADS)

    Shankar, A.; Russ, M.; Vijayan, S.; Bednarek, D. R.; Rudin, S.

    2017-03-01

    Apodized Aperture Pixel (AAP) design, proposed by Ismailova et al., is an alternative to the conventional pixel design. The advantages of AAP processing with a sinc filter in comparison with using other filters include non-degradation of MTF values and elimination of signal and noise aliasing, resulting in increased performance at higher frequencies, approaching the Nyquist frequency. If high resolution small field-of-view (FOV) detectors with small pixels used during critical stages of Endovascular Image Guided Interventions (EIGIs) could also be extended to cover a full field-of-view typical of flat panel detectors (FPDs) and made to have larger effective pixels, then methods must be used to preserve the MTF over the frequency range up to the Nyquist frequency of the FPD while minimizing aliasing. In this work, we convolve the experimentally measured MTFs of a Microangiographic Fluoroscope (MAF) detector (the MAF-CCD, with 35 μm pixels) and a High Resolution Fluoroscope (HRF) detector (the HRF-CMOS50, with 49.5 μm pixels) with the AAP filter and show the superiority of the results compared to MTFs resulting from moving-average pixel binning and to the MTF of a standard FPD. The effect of using AAP is also shown in the spatial domain, when used to image an infinitely small point object. For detectors in neurovascular interventions, where high resolution is the priority during critical parts of the intervention but a full FOV with larger pixels is needed during less critical parts, the AAP design provides an alternative to simple pixel binning: it effectively eliminates signal and noise aliasing while allowing small-FOV high-resolution imaging to be maintained during critical parts of the EIGI.

  18. Dynamic change in mitral regurgitant orifice area: comparison of color Doppler echocardiographic and electromagnetic flowmeter-based methods in a chronic animal model.

    PubMed

    Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J

    1995-08-01

    The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.
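    The flow convergence computation referred to here follows the standard hemispheric (PISA) assumption: instantaneous flow rate Q = 2πr²·v_alias across the isovelocity shell, and orifice area Q / v_peak using the continuous-wave jet velocity. A sketch with hypothetical values, not the study's measurements:

```python
import math

def regurgitant_orifice_area(r, v_alias, v_peak):
    """Flow-convergence (PISA) estimate of regurgitant orifice area (cm^2).

    Assumes a hemispheric isovelocity surface: instantaneous flow rate
    Q = 2*pi*r^2*v_alias, and area = Q / v_peak (continuous-wave velocity).
    r in cm, velocities in cm/s.
    """
    q = 2 * math.pi * r**2 * v_alias
    return q / v_peak

# Hypothetical numbers: aliasing radius 0.9 cm at an aliasing velocity of
# 40 cm/s, peak regurgitant jet velocity 500 cm/s
area = regurgitant_orifice_area(0.9, 40.0, 500.0)   # ~0.41 cm^2
```

    The study's finding that lower aliasing velocities overestimate the orifice area reflects a breakdown of the hemispheric assumption far from the orifice, where the isovelocity surface flattens.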

  19. Influence of running stride frequency in heart rate variability analysis during treadmill exercise testing.

    PubMed

    Bailón, Raquel; Garatachea, Nuria; de la Iglesia, Ignacio; Casajús, Jose Antonio; Laguna, Pablo

    2013-07-01

    The analysis and interpretation of heart rate variability (HRV) during exercise is challenging not only because of the nonstationary nature of exercise, the time-varying mean heart rate, and the fact that respiratory frequency exceeds 0.4 Hz, but also because of other factors, such as the component centered at the pedaling frequency observed in maximal cycling tests, which may confound the interpretation of HRV analysis. The objectives of this study are to test the hypothesis that a component centered at the running stride frequency (SF) appears in the HRV of subjects during maximal treadmill exercise testing, and to study its influence on the interpretation of the low-frequency (LF) and high-frequency (HF) components of HRV during exercise. The HRV of 23 subjects during maximal treadmill exercise testing is analyzed. The instantaneous power of different HRV components is computed from the smoothed pseudo-Wigner-Ville distribution of the modulating signal assumed to carry information from the autonomic nervous system, which is estimated based on the time-varying integral pulse frequency modulation model. Besides the LF and HF components, a component centered at the running SF, as well as its aliases, is revealed. The power associated with the SF component and its aliases represents 22±7% (median±median absolute deviation) of the total HRV power in all the subjects. Normalized LF power decreases as the exercise intensity increases, while normalized HF power increases. The power associated with the SF does not change significantly with exercise intensity. Consideration of the running SF component and its aliases is very important in HRV analysis, since stride frequency aliases may overlap with the LF and HF components.
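    Because HRV is effectively sampled once per heartbeat, the mean heart rate sets the sampling rate, and a stride component above half that rate folds back into the analysis band. A sketch with illustrative numbers not taken from the study:

```python
def folded(f, fs):
    """Fold frequency f into the baseband [0, fs/2] for sampling rate fs (Hz)."""
    f = f % fs
    return min(f, fs - f)

# Hypothetical running example: stride frequency 2.8 Hz, mean heart rate
# 150 beats/min -> effective sampling rate 2.5 Hz
alias = folded(2.8, 2.5)   # 0.3 Hz: the alias lands inside the HF band
```

    This is why the stride-frequency aliases can overlap the LF (0.04-0.15 Hz) and HF components and must be accounted for when interpreting exercise HRV spectra.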

  20. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    PubMed

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which was comprised of two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict.

  1. Fast algorithm for the rendering of three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1994-02-01

    It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.

  2. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  3. Acquisition of a full-resolution image and aliasing reduction for a spatially modulated imaging polarimeter with two snapshots

    PubMed Central

    Zhang, Jing; Yuan, Changan; Huang, Guohua; Zhao, Yinjun; Ren, Wenyi; Cao, Qizhi; Li, Jianying; Jin, Mingwu

    2018-01-01

    A snapshot imaging polarimeter using spatial modulation can encode four Stokes parameters allowing instantaneous polarization measurement from a single interferogram. However, the reconstructed polarization images could suffer a severe aliasing signal if the high-frequency component of the intensity image is prominent and occurs in the polarization channels, and the reconstructed intensity image also suffers reduction of spatial resolution due to low-pass filtering. In this work, a method using two anti-phase snapshots is proposed to address the two problems simultaneously. The full-resolution target image and the pure interference fringes can be obtained from the sum and the difference of the two anti-phase interferograms, respectively. The polarization information reconstructed from the pure interference fringes does not contain the aliasing signal from the high-frequency component of the object intensity image. The principles of the method are derived and its feasibility is tested by both computer simulation and a verification experiment. This work provides a novel method for spatially modulated imaging polarization technology with two snapshots to simultaneously reconstruct a full-resolution object intensity image and high-quality polarization components. PMID:29714224
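    The core of the two-snapshot idea can be shown with a toy model in which each interferogram is the object intensity plus a fringe term whose sign flips between the anti-phase acquisitions (a deliberate simplification of the actual polarimetric encoding):

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.random((64, 64))          # object intensity image
y = np.arange(64)
# Toy fringe term: sinusoidal carrier modulating the object intensity
fringes = 0.3 * np.cos(2 * np.pi * 8 * y / 64)[None, :] * obj

# Two anti-phase snapshots: the fringe term flips sign between acquisitions
s1 = obj + fringes
s2 = obj - fringes

full_res = (s1 + s2) / 2       # full-resolution intensity, no low-pass filtering
pure_fringes = (s1 - s2) / 2   # interference fringes free of the intensity term
```

    The sum cancels the fringes exactly, so no low-pass filter (and its resolution loss) is needed for the intensity image, while the difference cancels the intensity term, so its high-frequency content cannot alias into the polarization channels.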

  4. Sampling and Reconstruction of the Pupil and Electric Field for Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Smith, Jeffrey; Aronstein, David

    2012-01-01

    This technology is based on sampling considerations for a band-limited function, which has application to optical estimation generally, and to phase retrieval specifically. The analysis begins with the observation that the Fourier transform of an optical aperture function (pupil) can be implemented with minimal aliasing for Q values down to Q = 1. The sampling ratio, Q, is defined as the ratio of the sampling frequency to the band-limited cut-off frequency. The analytical results are given using a 1-d aperture function, and with the electric field defined by the band-limited sinc(x) function. Perfect reconstruction of the Fourier transform (electric field) is derived using the Whittaker-Shannon sampling theorem for 1
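    The Whittaker-Shannon reconstruction invoked here expands the field as x(t) = Σₙ x[nT] · sinc((t − nT)/T). A sketch on a band-limited 1-d test signal (the sinc² signal and the sampling interval are illustrative choices, not the paper's):

```python
import numpy as np

T = 0.4                         # sampling interval; fs = 2.5 samples/unit
n = np.arange(-200, 201)
# Band-limited test signal: sinc^2 has spectral support |f| <= 1 cycle/unit,
# so fs = 2.5 > 2 satisfies the sampling theorem
samples = np.sinc(n * T) ** 2

def reconstruct(t):
    """Whittaker-Shannon interpolation from the stored samples."""
    return np.sum(samples * np.sinc((t - n * T) / T))

t = 0.37
x_true = np.sinc(t) ** 2
x_rec = reconstruct(t)          # matches x_true up to truncation of the series
```

    The residual error here comes only from truncating the infinite sample sum; for a decaying signal like sinc² it is negligible, which is the sense in which the transform can be "perfectly" reconstructed.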

  5. The standardized EEG electrode array of the IFCN.

    PubMed

    Seeck, Margitta; Koessler, Laurent; Bast, Thomas; Leijten, Frans; Michel, Christoph; Baumgartner, Christoph; He, Bin; Beniczky, Sándor

    2017-10-01

    Standardized EEG electrode positions are essential for both clinical applications and research. The aim of this guideline is to update and expand the unifying nomenclature and standardized positioning for EEG scalp electrodes. Electrode positions were based on 20% and 10% of standardized measurements from anatomical landmarks on the skull. However, standard recordings do not cover the anterior and basal temporal lobes, which are the most frequent source of epileptogenic activity. Here, we propose a basic array of 25 electrodes including the inferior temporal chain, which should be used for all standard clinical recordings. The nomenclature in the basic array is consistent with the 10-10 system. High-density scalp EEG arrays (64-256 electrodes) allow source imaging with even sub-lobar precision. This supplementary exam should be requested whenever necessary, e.g. when searching for epileptogenic activity in a negative standard EEG or for presurgical evaluation. In the near future, nomenclature for high-density electrode arrays beyond the 10-10 system needs to be defined, to allow comparison and standardized recordings across centers. Contrary to the established belief that smaller heads need fewer electrodes, in young children at least as many electrodes as in adults should be applied, owing to smaller skull thickness and the risk of spatial aliasing.

  6. The error structure of the SMAP single and dual channel soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    Knowledge of the temporal error structure for remotely-sensed surface soil moisture retrievals can improve our ability to exploit them for hydrology and climate studies. This study employs a triple collocation type analysis to investigate both the total variance and temporal auto-correlation of erro...

  7. The successively temporal error concealment algorithm using error-adaptive block matching principle

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun

    2014-09-01

    Generally, temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although an RB can serve as the search pattern to find the best-match block of another CB, the RB is not identical to its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
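    The block matching described above can be illustrated generically: recover a lost block by searching the previous frame for the candidate whose surrounding ring of pixels best matches (minimum sum of absolute differences) the ring around the corrupted block. This is a plain boundary-matching sketch, not the paper's EA-BMP weighting:

```python
import numpy as np

def conceal_block(prev, cur, r0, c0, bs=8, search=4):
    """Recover a corrupted bs x bs block at (r0, c0) in `cur` by matching the
    one-pixel ring around it against candidate positions in `prev` (SAD)."""
    pad = 1
    ring_cur = cur[r0 - pad:r0 + bs + pad, c0 - pad:c0 + bs + pad].astype(float)
    mask = np.ones_like(ring_cur, dtype=bool)
    mask[pad:pad + bs, pad:pad + bs] = False   # exclude the corrupted block itself
    best, best_sad = None, np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r - pad < 0 or c - pad < 0 or \
               r + bs + pad > prev.shape[0] or c + bs + pad > prev.shape[1]:
                continue
            cand = prev[r - pad:r + bs + pad, c - pad:c + bs + pad].astype(float)
            sad = np.abs(cand - ring_cur)[mask].sum()
            if sad < best_sad:
                best_sad, best = sad, prev[r:r + bs, c:c + bs].copy()
    return best

# Demo: current frame is the previous frame shifted right by 2 pixels,
# with one block lost in transmission
rng = np.random.default_rng(1)
prev = rng.random((32, 32))
cur = np.roll(prev, 2, axis=1)
original = cur[12:20, 12:20].copy()
cur[12:20, 12:20] = 0.0                    # simulate the corrupted block
recovered = conceal_block(prev, cur, 12, 12)
```

    The STEC refinement then reuses each recovered block as part of the search pattern for the next one, with EA-BMP down-weighting the less reliable recovered pixels.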

  8. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies.
When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm and the relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on a FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along the left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along the L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along the L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  9. Temporal information processing in short- and long-term memory of patients with schizophrenia.

    PubMed

    Landgraf, Steffen; Steingen, Joerg; Eppert, Yvonne; Niedermeyer, Ulrich; van der Meer, Elke; Krueger, Frank

    2011-01-01

    Cognitive deficits of patients with schizophrenia have been largely recognized as core symptoms of the disorder. One neglected factor that contributes to these deficits is the comprehension of time. In the present study, we assessed temporal information processing and manipulation from short- and long-term memory in 34 patients with chronic schizophrenia and 34 matched healthy controls. On the short-term memory temporal-order reconstruction task, an incidental or intentional learning strategy was deployed. Patients showed worse overall performance than healthy controls. The intentional learning strategy led to dissociable performance improvement in both groups. Whereas healthy controls improved on a performance measure (serial organization), patients improved on an error measure (inappropriate semantic clustering) when using the intentional instead of the incidental learning strategy. On the long-term memory script-generation task, routine and non-routine events of everyday activities (e.g., buying groceries) had to be generated in either chronological or inverted temporal order. Patients were slower than controls at generating events in the chronological routine condition only. They also committed more sequencing and boundary errors in the inverted conditions. The number of irrelevant events was higher in patients in the chronological, non-routine condition. These results suggest that patients with schizophrenia imprecisely access temporal information from short- and long-term memory. In short-term memory, processing of temporal information led to a reduction in errors rather than, as was the case in healthy controls, to an improvement in temporal-order recall. When accessing temporal information from long-term memory, patients were slower and committed more sequencing, boundary, and intrusion errors. 
Together, these results suggest that patients with schizophrenia can access and process time information only imprecisely, providing evidence for impaired time comprehension. This could contribute to symptomatic cognitive deficits and strategic inefficiency in schizophrenia.

  10. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

Estimates of the standard errors of total ozone time and area means are presented, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes, determined from daily Dobson data. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
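The dependence of a time mean's standard error on autocorrelation can be illustrated with a short sketch (not from the paper; an AR(1) toy series stands in for daily Dobson anomalies, and the effective-sample-size formula n_eff = n(1 - r1)/(1 + r1) is a standard first-order approximation):

```python
import numpy as np

def autocorr_se_of_mean(x):
    """Standard error of the time mean of an autocorrelated series.

    Uses the effective sample size n_eff = n * (1 - r1) / (1 + r1),
    where r1 is the lag-1 autocorrelation (a first-order approximation).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xm = x - x.mean()
    r1 = np.dot(xm[:-1], xm[1:]) / np.dot(xm, xm)  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)
    return x.std(ddof=1) / np.sqrt(n_eff)

rng = np.random.default_rng(0)
# AR(1) series with r ~ 0.8, mimicking correlated daily anomalies
x = np.zeros(1000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()

naive = x.std(ddof=1) / np.sqrt(len(x))  # ignores autocorrelation
print(naive, autocorr_se_of_mean(x))     # corrected SE is roughly 3x larger
```

Ignoring the correlation makes apparent trends look far more significant than they are, which is the crux of the trend-detectability question in this record.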

  11. Testing and Implementation of the Navy's Operational Circulation Model for the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Farrar, P. D.; Mask, A. C.

    2012-04-01

The US Naval Oceanographic Office (NAVOCEANO) has the responsibility for running ocean models in support of Navy operations. NAVOCEANO delivers Navy-relevant global, regional, and coastal ocean forecast products on a 24 hour/7 day a week schedule. In 2011, NAVOCEANO implemented an operational version of the RNCOM (Regional Navy Coastal Ocean Model) for the Mediterranean Sea (MedSea), replacing an older variation of the Princeton Ocean Model originally set up for this area in the mid-1990s. RNCOM is a gridded model that assimilates both satellite data and in situ profile data in near real time. This 3 km MedSea RNCOM is nested within a lower-resolution global NCOM in the Atlantic at 12.5 degrees West longitude. Before being accepted as a source of operational products, a Navy ocean model must pass a series of validation tests; once in service, its skill is monitored by software and regional specialists. This presentation will provide a brief summary of the initial evaluation results. Because of the oceanographic peculiarities of this basin, the MedSea implementation posed a set of new problems for an RNCOM operation. One problem was that the present Navy satellite altimetry assimilation techniques do not improve Mediterranean NCOM forecasts, so altimetry assimilation has been turned off, pending improvements. Another was temporal aliasing: since most in situ observations were profiling floats with short five-day profiling intervals, and because of the time and spatial correlations in the MedSea and in the model, the observation/model comparisons would give an unrealistically optimistic estimate of model accuracy for the Mediterranean's temperature/salinity structure. Careful pre-selection of profiles for comparison during the evaluation stage, based on spatial distribution and novelty, was used to minimize this effect.
NAVOCEANO's operational customers are interested primarily in the detailed features of the vertical temperature profile, and secondarily in the current field; salinity, heat content, sea level, and other fields matter less. The principal error in the temperature field is found to be in the modeled depth of the mixed layer. Overall model performance was found to be satisfactory for operational use.

  12. Impact of orbit design choices on the gravity field retrieval of Next Generation Gravity Missions - Insights on the ESA-ADDCON project

    NASA Astrophysics Data System (ADS)

    Daras, Ilias; Visser, Pieter; Sneeuw, Nico; van Dam, Tonie; Pail, Roland; Gruber, Thomas; Tabibi, Sajad; Chen, Qiang; Liu, Wei; Tourian, Mohammad; Engels, Johannes; Saemian, Peyman; Siemes, Christian; Haagmans, Roger

    2017-04-01

Next Generation Gravity Missions (NGGMs), expected to be launched in the mid-term future, have raised high expectations for enhanced monitoring of mass transport in the Earth system, making their products applicable to new scientific fields and societal needs. The European Space Agency (ESA) has issued several studies on concepts for NGGMs. Following this tradition, the project "Additional Constellations & Scientific Analysis Studies of the Next Generation Gravity Mission" (ESA-ADDCON) picks up where the previous study, ESA-SC4MGV, left off. One of the ESA-ADDCON project objectives is to investigate the impact of different orbit configurations and parameters on the gravity field retrieval. Given a two-pair Bender-type constellation, consisting of a polar and an inclined pair, orbit design choices such as the altitude profile during the mission lifetime, the length of the retrieval period, the choice of sub-cycles, and the choice of a prograde over a retrograde orbit are investigated. Moreover, the problem of aliasing due to ocean tide model inaccuracies, as well as methods for mitigating its effect on gravity field solutions, is investigated in the context of NGGMs. The performed simulations use the gravity field processing approach in which low-resolution gravity field solutions are co-parameterized in short-term periods (e.g., daily) together with the long-term solutions (e.g., an 11-day solution). This method proved beneficial for NGGMs in the ESA-SC4MGV project, since the enhanced spatio-temporal sampling enables a self-de-aliasing of high-frequency atmospheric and oceanic signals, which may now become part of the retrieved signal. The potential added value of having such signals for the first time in near real-time is assessed within the project. This paper presents preliminary results of the ESA-ADDCON project, focusing on aspects of orbit design choices for NGGMs.

  13. Evaluating the impact of above-cloud aerosols on cloud optical depth retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Alfaro, Ricardo

Using two different operational Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) cloud optical depth (COD) retrievals (visible and shortwave infrared), the impacts of above-cloud absorbing aerosols on the standard COD retrievals are evaluated. For fine-mode aerosol particles, aerosol optical depth (AOD) values diminish sharply from the visible to the shortwave infrared channels. Thus, a suppressed above-cloud particle radiance aliasing effect occurs for COD retrievals using shortwave infrared channels. The Aerosol Index (AI) from the spatially and temporally collocated Ozone Monitoring Instrument (OMI) is used to identify above-cloud aerosol particle loading over the southern Atlantic Ocean, including both smoke and dust from the African sub-continent. Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data collocated with MODIS and OMI are used to constrain cloud phase and provide contextual above-cloud AOD values. The frequency of occurrence of above-cloud aerosols is depicted on a global scale for the spring and summer seasons from OMI and CALIOP, indicating the significance of the problem. Seasonal frequencies of smoke-over-cloud off the southwestern African coastline reach 20-50% in boreal summer. We find a corresponding low COD bias of 10-20% for standard MODIS COD retrievals when averaged OMI AI values are larger than 1.0. No such bias is found over the Saharan dust outflow region off northern Africa, since both the MODIS visible and shortwave infrared channels are vulnerable to dust particle aliasing, and thus a COD impact cannot be isolated with this method. A similar result is found for a smaller domain, in the Gulf of Tonkin region, from smoke advection over marine stratocumulus clouds and outflow into the northern South China Sea in spring.
This study shows the necessity of accounting for above-cloud aerosol events in future studies using standard MODIS cloud products in biomass burning outflow regions, through the use of collocated OMI AI and supplementary MODIS shortwave infrared COD products.

  14. Spurious One-Month and One-Year Periods in Visual Observations of Variable Stars

    NASA Astrophysics Data System (ADS)

    Percy, J. R.

    2015-12-01

Visual observations of variable stars, when analyzed with time-series algorithms such as DC-DFT in vstar, show spurious periods at or close to one synodic month (29.5306 days), and also at about one year, with amplitudes of typically a few hundredths of a magnitude. The one-year periods have been attributed to the Ceraski effect, which was believed to be a physiological effect of the visual observing process. This paper reports on time-series analysis, using DC-DFT in vstar, of visual observations (and in some cases, V observations) of a large number of stars in the AAVSO International Database, initially to investigate the one-month periods. The results suggest that both the one-month and one-year periods are actually due to aliasing of the stars' very low-frequency variations, though they do not rule out very low-amplitude signals (typically 0.01 to 0.02 magnitude) which may be due to a different process, such as a physiological one. Most or all of these aliasing effects may be avoided by using a different algorithm that takes explicit account of the window function of the data, and/or by being fully aware of the possible presence of very low-frequency variations and of aliasing by them.
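The aliasing mechanism invoked here can be shown with a minimal sketch (a deliberately idealized cadence, not the paper's data): if samples are taken at a fixed interval Δ, frequencies f and f + 1/Δ produce identical samples, so a very slow stellar variation is indistinguishable from a signal with a period near Δ itself:

```python
import numpy as np

DELTA = 29.5306               # synodic month in days (idealized sampling interval)
t = np.arange(100) * DELTA    # one observation per lunation, for illustration

f_true = 1.0 / 1000.0             # slow variation: period 1000 d
f_alias = f_true + 1.0 / DELTA    # its alias partner under this sampling

y_true = np.cos(2 * np.pi * f_true * t)
y_alias = np.cos(2 * np.pi * f_alias * t)

# Sampled once per synodic month, the two frequencies coincide exactly,
# because cos(2*pi*(f + 1/DELTA)*n*DELTA) = cos(2*pi*f*n*DELTA + 2*pi*n):
print(np.allclose(y_true, y_alias))  # True
```

A periodogram of such data therefore shows power near 29.5 days even though the only real variation is at a 1000-day period, which is the sense in which low-frequency variability masquerades as one-month periods.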

  15. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" (bitbucket.org/ecus/sedproxy) numerically simulates the creation, archiving, and observation of marine-sediment-archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing, and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions and their smoothed or derived products, where individual measurements have been aggregated to coarser time scales or time slices. To address this, we derive closed-form expressions for the power spectral density of the various error sources. The power spectra describe both the magnitude and the autocorrelation structure of the error, allowing timescale-dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, together with some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time series of the last millennium, the Holocene, and the deglaciation.
While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal reconstructability, conditional on the climate and sampling conditions.

  16. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    NASA Astrophysics Data System (ADS)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effects of quadrature choices (full mass matrix vs. spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices, in periodic and non-periodic domains, the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.

  17. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time was reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved, achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality for the proposed method were improved compared to the standard k-t SENSE method.

  18. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  19. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.

  20. Lattice functions, wavelet aliasing, and SO(3) mappings of orthonormal filters

    NASA Astrophysics Data System (ADS)

    John, Sarah

    1998-01-01

A formulation of multiresolution in terms of a family of dyadic lattices {Sj; j ∈ Z} and filter matrices Mj ∈ U(2) ⊂ GL(2,C) illuminates the role of aliasing in wavelets and provides exact relations between scaling and wavelet filters. By showing the {DN; N ∈ Z+} collection of compactly supported, orthonormal wavelet filters to lie strictly in SU(2) ⊂ U(2), its representation in the Euler angles of the rotation group SO(3) establishes several new results: a 1:1 mapping of the {DN} filters onto a set of orbits on the SO(3) manifold; an equivalence of D∞ to the Shannon filter; and a simple new proof for a criterion ruling out pathologically scaled non-orthonormal filters.

  1. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls of earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental in preventing particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme.
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
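The benefit of a time-centered implicit update can be illustrated on a toy problem (a single harmonic oscillator, not the paper's PIC scheme): the implicit midpoint rule conserves the quadratic energy invariant to round-off even for time steps far beyond the explicit stability limit, which is the same mechanism behind the "arbitrary implicit time step" energy conservation described above:

```python
import numpy as np

def midpoint_push(x, v, omega, dt, steps):
    """Time-centered (implicit midpoint) push for a harmonic oscillator.

    Toy analogue of a time-centered implicit particle push: the quadratic
    energy invariant is conserved to round-off for any dt. The 2x2 implicit
    system is solved in closed form here.
    """
    a = 0.5 * dt
    b = 0.5 * dt * omega**2
    for _ in range(steps):
        # Solve x' = x + a(v + v'), v' = v - b(x + x') for (x', v'):
        v_new = ((1 - a * b) * v - 2 * b * x) / (1 + a * b)
        x = x + a * (v + v_new)
        v = v_new
    return x, v

omega, dt = 1.0, 5.0          # time step far above the explicit stability limit
x0, v0 = 1.0, 0.0
e0 = 0.5 * (v0**2 + omega**2 * x0**2)

x, v = midpoint_push(x0, v0, omega, dt, 10_000)
e = 0.5 * (v**2 + omega**2 * x**2)
print(abs(e - e0))  # energy error stays at round-off level despite the huge step
```

Phase accuracy still degrades for very large steps; it is the invariant, not the trajectory detail, that is preserved, matching the paper's distinction between conservation and accuracy constraints.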

  2. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme.
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  3. Spatio-temporal error growth in the multi-scale Lorenz'96 model

    NASA Astrophysics Data System (ADS)

    Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.

    2010-07-01

    The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
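A minimal sketch of the basic error-growth experiment in the two-scale Lorenz'96 model (standard textbook parameter values, not necessarily those of the study): integrate a reference and a perturbed trajectory and track their divergence.

```python
import numpy as np

# Two-scale Lorenz'96: K slow X variables, each coupled to J fast Y
# variables. Parameter values below are common choices, assumed here.
K, J, F, h, b, c = 8, 10, 10.0, 1.0, 10.0, 10.0

def rhs(X, Y):
    # Slow variables, damped/forced, with mean fast-variable feedback:
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    # Fast variables, advected at speed c*b, driven by their parent X:
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

def rk4(X, Y, dt):
    k1x, k1y = rhs(X, Y)
    k2x, k2y = rhs(X + 0.5 * dt * k1x, Y + 0.5 * dt * k1y)
    k3x, k3y = rhs(X + 0.5 * dt * k2x, Y + 0.5 * dt * k2y)
    k4x, k4y = rhs(X + dt * k3x, Y + dt * k3y)
    return (X + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            Y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

rng = np.random.default_rng(1)
X, Y = F + rng.normal(0, 1, K), rng.normal(0, 0.1, K * J)
dt = 0.001
for _ in range(2000):             # spin-up toward the attractor
    X, Y = rk4(X, Y, dt)

Xp, Yp = X + 1e-6, Y.copy()       # perturb the slow variables only
err0 = np.linalg.norm(Xp - X)
for _ in range(3000):             # evolve both trajectories
    X, Y = rk4(X, Y, dt)
    Xp, Yp = rk4(Xp, Yp, dt)
err1 = np.linalg.norm(Xp - X)
print(err0, err1)                 # the perturbation grows: chaotic error growth
```

Repeating this over an ensemble of perturbations, and separating the error norms of the X and Y subsystems, yields the growth curves whose regimes (before and after fast-variable saturation) the paper analyzes.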

  4. Compressed sensing reconstruction of cardiac cine MRI using golden angle spiral trajectories

    NASA Astrophysics Data System (ADS)

    Tolouee, Azar; Alirezaie, Javad; Babyn, Paul

    2015-11-01

In dynamic cardiac cine Magnetic Resonance Imaging (MRI), the spatiotemporal resolution is limited by the low imaging speed. Compressed sensing (CS) theory has been applied to improve the imaging speed and thus the spatiotemporal resolution. The purpose of this paper is to improve CS reconstruction of undersampled data by exploiting spatiotemporal sparsity and efficient spiral trajectories. We extend the k-t sparse algorithm to spiral trajectories to achieve high spatiotemporal resolutions in cardiac cine imaging. We have exploited the spatiotemporal sparsity of cardiac cine MRI by applying a 2D + time wavelet-Fourier transform. For efficient coverage of k-space, we have used a modified version of multi-shot (interleaved) spiral trajectories. In order to reduce incoherent aliasing artifacts, we use a different random undersampling pattern for each temporal frame. Finally, we have used the nonuniform fast Fourier transform (NUFFT) algorithm to reconstruct the image from the non-uniformly acquired samples. The proposed approach was tested on simulated and cardiac cine MRI data. Results show that higher acceleration factors with improved image quality can be obtained with the proposed approach in comparison to the existing state-of-the-art method. The flexibility of the introduced method should allow it to be used not only for the challenging case of cardiac imaging, but also for other applications in which the patient moves or breathes during acquisition.

  5. Temporal-difference prediction errors and Pavlovian fear conditioning: role of NMDA and opioid receptors.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2007-10-01

Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB → CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error.
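The blocking design of Stages I and II follows from prediction errors being shared across cues. A trial-level Rescorla-Wagner sketch (the simplest special case of TD learning, with hypothetical learning-rate values; it omits the within-trial timing that full TD captures) reproduces the effect:

```python
# Rescorla-Wagner: associative strengths are updated in proportion to a
# shared prediction error, delta = outcome - summed prediction.
alpha, lam = 0.3, 1.0          # learning rate and shock magnitude (illustrative)
w = {"A": 0.0, "B": 0.0}       # associative strengths of CSA and CSB

for _ in range(50):                     # Stage I: CSA -> shock
    delta = lam - w["A"]                # prediction error on A-alone trials
    w["A"] += alpha * delta

for _ in range(50):                     # Stage II: CSA + CSB -> shock
    delta = lam - (w["A"] + w["B"])     # error computed on the compound
    w["A"] += alpha * delta             # both cues share the (near-zero) error
    w["B"] += alpha * delta

print(w)  # w["A"] near 1, w["B"] near 0: learning to CSB is blocked
```

Because CSA already predicts the shock after Stage I, the Stage II error is near zero and CSB acquires almost no strength; the Stage III serial-compound manipulation in the paper is what distinguishes genuinely temporal (TD) errors from this trial-level account.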

  6. Temporal specificity of reward prediction errors signaled by putative dopamine neurons in rat VTA depends on ventral striatum

    PubMed Central

    Takahashi, Yuji K.; Langdon, Angela J.; Niv, Yael; Schoenbaum, Geoffrey

    2016-01-01

Dopamine neurons signal reward prediction errors, which requires accurate reward predictions. It has been suggested that the ventral striatum provides these predictions. Here we tested this hypothesis by recording from putative dopamine neurons in the VTA of rats performing a task in which prediction errors were induced by shifting reward timing or number. In controls, the neurons exhibited error signals in response to both manipulations. However, dopamine neurons in rats with ipsilateral ventral striatal lesions exhibited errors only to changes in number and failed to respond to changes in timing of reward. These results, supported by computational modeling, indicate that predictions about the temporal specificity and the number of expected rewards are dissociable, and that dopaminergic prediction-error signals rely on the ventral striatum for the former but not the latter. PMID:27292535

  7. Decomposition of Sources of Errors in Seasonal Streamflow Forecasts in a Rainfall-Runoff Dominated Basin

    NASA Astrophysics Data System (ADS)

    Sinha, T.; Arumugam, S.

    2012-12-01

Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized to update water management plans and optimize hydroelectric power generation. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow-dominated basins, where initial hydrologic conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to drive land surface models, which typically run on finer spatial and temporal scales. Consequently, errors are introduced at multiple stages in predicting seasonal streamflow. In this study, we therefore address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to various sources: (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? 2) How do monthly streamflow forecast errors propagate with lead time over various seasons? The Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL, in the southeastern US and implemented with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate relative mean square errors due to: a) temporal disaggregation, b) spatial downscaling, c) reverse Ensemble Streamflow Prediction (imprecise IHCs), d) ESP (no forecasts), and e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed with lead time over different seasons.

  8. Spatiotemporal Filtering Using Principal Component Analysis and Karhunen-Loeve Expansion Approaches for Regional GPS Network Analysis

    NASA Technical Reports Server (NTRS)

    Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.

    2006-01-01

Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks up to hundreds of kilometers in extent but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of a spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. Principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses, and therefore provide a mathematical framework for spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for the east, north, and vertical components, which implies a very long wavelength source compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
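The PCA step can be sketched with synthetic data (illustrative only, not SCIGN solutions): stack the network time series into an epochs-by-stations matrix and remove the leading singular mode, which here captures a spatially uniform common-mode error.

```python
import numpy as np

# Synthetic network: a shared random-walk "common mode error" plus
# independent station noise (all amplitudes are assumed values).
rng = np.random.default_rng(0)
n_epochs, n_sta = 500, 20
common = np.cumsum(rng.normal(0, 0.2, n_epochs))    # shared error time series
data = (np.outer(common, np.ones(n_sta))            # uniform spatial response
        + rng.normal(0, 1.0, (n_epochs, n_sta)))    # station-local noise

X = data - data.mean(axis=0)                        # remove per-station means
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Leading principal component = dominant temporally varying mode;
# its spatial response across stations is the first row of Vt.
cme = np.outer(U[:, 0] * s[0], Vt[0])               # common-mode estimate
filtered = X - cme                                  # spatiotemporally filtered

print(X.std(), filtered.std())                      # scatter drops after filtering
```

Letting the data supply the spatial response (the row of Vt) rather than assuming it uniform is exactly the generalization over simple network averaging that the abstract describes.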

  9. Variability in Post-Error Behavioral Adjustment Is Associated with Functional Abnormalities in the Temporal Cortex in Children with ADHD

    ERIC Educational Resources Information Center

    Spinelli, Simona; Vasa, Roma A.; Joel, Suresh; Nelson, Tess E.; Pekar, James J.; Mostofsky, Stewart H.

    2011-01-01

    Background: Error processing is reflected, behaviorally, by slower reaction times (RT) on trials immediately following an error (post-error). Children with attention-deficit hyperactivity disorder (ADHD) fail to show RT slowing and demonstrate increased intra-subject variability (ISV) on post-error trials. The neural correlates of these behavioral…

  10. Route Learning Impairment in Temporal Lobe Epilepsy

    PubMed Central

    Bell, Brian D.

    2012-01-01

    Memory impairment on neuropsychological tests is relatively common in temporal lobe epilepsy (TLE) patients. But memory rarely has been evaluated in more naturalistic settings. This study assessed TLE (n = 19) and control (n = 32) groups on a real-world route learning (RL) test. Compared to the controls, the TLE group committed significantly more total errors across the three RL test trials. RL errors correlated significantly with standardized auditory and visual memory and visual-perceptual test scores in the TLE group. In the TLE subset for whom hippocampal data were available (n = 14), RL errors also correlated significantly with left hippocampal volume. This is one of the first studies to demonstrate real-world memory impairment in TLE patients and its association with both mesial temporal lobe integrity and standardized memory test performance. The results support the ecological validity of clinical neuropsychological assessment. PMID:23041173

  11. Color, contrast sensitivity, and the cone mosaic.

    PubMed Central

    Williams, D; Sekiguchi, N; Brainard, D

    1993-01-01

    This paper evaluates the role of various stages in the human visual system in the detection of spatial patterns. Contrast sensitivity measurements were made for interference fringe stimuli in three directions in color space with a psychophysical technique that avoided blurring by the eye's optics including chromatic aberration. These measurements were compared with the performance of an ideal observer that incorporated optical factors, such as photon catch in the cone mosaic, that influence the detection of interference fringes. The comparison of human and ideal observer performance showed that neural factors influence the shape as well as the height of the foveal contrast sensitivity function for all color directions, including those that involve luminance modulation. Furthermore, when optical factors are taken into account, the neural visual system has the same contrast sensitivity for isoluminant stimuli seen by the middle-wavelength-sensitive (M) and long-wavelength-sensitive (L) cones and isoluminant stimuli seen by the short-wavelength-sensitive (S) cones. Though the cone submosaics that feed these chromatic mechanisms have very different spatial properties, the later neural stages apparently have similar spatial properties. Finally, we review the evidence that cone sampling can produce aliasing distortion for gratings with spatial frequencies exceeding the resolution limit. Aliasing can be observed with gratings modulated in any of the three directions in color space we used. We discuss mechanisms that prevent aliasing in most ordinary viewing conditions. PMID:8234313

  12. Off-resonance suppression for multispectral MR imaging near metallic implants.

    PubMed

    den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens

    2015-01-01

    The goal was metal artifact reduction in MRI within clinically feasible scan times and without through-plane aliasing. Existing metal artifact reduction techniques include view angle tilting (VAT), which resolves in-plane distortions, and multispectral imaging (MSI) techniques, such as slice encoding for metal artifact correction (SEMAC) and multi-acquisition with variable resonances image combination (MAVRIC), that further reduce image distortions but significantly increase scan time. Scan time depends on anatomy size and the anticipated total spectral content of the signal. Signals outside the anticipated spatial region may cause through-plane back-folding. Off-resonance suppression (ORS), using different gradient amplitudes for excitation and refocusing, is proposed to provide well-defined spatial-spectral selectivity in MSI, allowing scan-time reduction and flexibility of scan orientation. Comparisons of MSI techniques with and without ORS were made in phantom and volunteer experiments. Off-resonance suppressed SEMAC (ORS-SEMAC) and outer-region suppressed MAVRIC (ORS-MAVRIC) required fewer through-plane phase encoding steps than the original MSI techniques. Whereas SEMAC (scan time: 5'46") and MAVRIC (4'12") suffered from through-plane aliasing, ORS-SEMAC and ORS-MAVRIC allowed alias-free imaging in the same scan times. ORS can be used in MSI to limit the selected spatial-spectral region and contribute to metal artifact reduction in clinically feasible scan times while avoiding slice aliasing. © 2014 Wiley Periodicals, Inc.

  13. Updating of aversive memories after temporal error detection is differentially modulated by mTOR across development

    PubMed Central

    Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie

    2017-01-01

    The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory. PMID:28202715

  14. Componential Network for the Recognition of Tool-Associated Actions: Evidence from Voxel-based Lesion-Symptom Mapping in Acute Stroke Patients.

    PubMed

    Martin, Markus; Dressing, Andrea; Bormann, Tobias; Schmidt, Charlotte S M; Kümmerer, Dorothee; Beume, Lena; Saur, Dorothee; Mader, Irina; Rijntjes, Michel; Kaller, Christoph P; Weiller, Cornelius

    2017-08-01

    The study aimed to elucidate areas involved in recognizing tool-associated actions, and to characterize the relationship between recognition and active performance of tool use. We performed voxel-based lesion-symptom mapping in a prospective cohort of 98 acute left-hemisphere ischemic stroke patients (68 male; age mean ± standard deviation, 65 ± 13 years; examination 4.4 ± 2 days post-stroke). In a video-based test, patients distinguished correct tool-related actions from actions with spatio-temporal (incorrect grip, kinematics, or tool orientation) or conceptual errors (incorrect tool-recipient matching, e.g., spreading jam on toast with a paintbrush). Moreover, spatio-temporal and conceptual errors were determined during actual tool use. Deficient spatio-temporal error discrimination followed lesions within a dorsal network in which the inferior parietal lobule (IPL) and the lateral temporal cortex (LTC) were specifically relevant for assessing functional hand postures and kinematics, respectively. Conversely, impaired recognition of conceptual errors resulted from damage to ventral stream regions including the anterior temporal lobe. Furthermore, LTC and IPL lesions impacted differently on action recognition and active tool use, respectively. In summary, recognition of tool-associated actions relies on a componential network. Our study particularly highlights the dissociable roles of LTC and IPL for the recognition of action kinematics and functional hand postures, respectively. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  16. The N/Rev phenomenon in simulating a blade-element rotor system

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    When a simulation model produces frequencies that are beyond the bandwidth of a discrete implementation, anomalous frequencies appear within the bandwidth. Such is the case with blade-element models of rotor systems, which are used in the real-time, man-in-the-loop simulation environment. Steady-state, high-frequency harmonics generated by these models, whether aliased or not, obscure piloted helicopter simulation responses. Since these harmonics are attenuated in actual rotorcraft (e.g., because of structural damping), a faithful environment representation for handling-qualities purposes may be created from the original model by using certain filtering techniques, as outlined here. These include harmonic consideration, conventional filtering, and decontamination. The process of decontamination is of special interest because frequencies of importance to simulation operation are not attenuated, whereas superimposed aliased harmonics are.
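    The folding described above can be checked numerically: a harmonic above the Nyquist frequency is indistinguishable, at the sample instants, from its alias at f_a = |f − k·fs| for the nearest integer k. The sample rate and harmonic frequency below are illustrative, not values from the paper.

```python
import math

fs = 100.0           # sample rate (Hz) of the discrete implementation
f_true = 130.0       # rotor harmonic above the Nyquist frequency fs/2
f_alias = abs(f_true - round(f_true / fs) * fs)  # folded (aliased) frequency

# At the sample instants the true harmonic and its alias are identical
samples_true = [math.sin(2 * math.pi * f_true * n / fs) for n in range(32)]
samples_alias = [math.sin(2 * math.pi * f_alias * n / fs) for n in range(32)]
max_diff = max(abs(a - b) for a, b in zip(samples_true, samples_alias))
```

    A 130 Hz harmonic sampled at 100 Hz thus masquerades as a 30 Hz oscillation, which is why such harmonics must be removed or decontaminated before they reach the pilot-in-the-loop simulation.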

  17. Signal conditioning units for vibration measurement in HUMS

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Liu, Tingting; Yu, Zirong; Chen, Lijuan; Huang, Xinjie

    2018-03-01

    A signal conditioning unit for vibration measurement in HUMS is proposed in this paper. Because the vibration frequencies produced by different helicopter components differ, a two-stage amplifier and a programmable anti-aliasing filter are designed to accommodate measurements on different types of helicopter. Vibration signals are first converted into measurable electrical signals by an ICP driver. A pre-amplifier and a programmable gain amplifier are then applied to magnify the weak electrical signals, and the programmable anti-aliasing filter suppresses noise interference. The unit was tested using a function signal generator and an oscilloscope. The experimental results demonstrate the effectiveness of the proposed method both quantitatively and qualitatively, and the approach can meet the measurement requirements of different types of helicopter.
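    A quick back-of-the-envelope check on any anti-aliasing stage is how much it attenuates at the Nyquist frequency, where out-of-band energy would fold back. For a first-order low-pass the magnitude response is |H(f)| = 1/sqrt(1 + (f/fc)²). The sampling rate and cutoff below are assumed for illustration; the paper's programmable filter is not specified here.

```python
import math

def first_order_attenuation_db(f, fc):
    """Magnitude response of a single-pole low-pass filter, in dB."""
    return 20 * math.log10(1 / math.sqrt(1 + (f / fc) ** 2))

fs = 2000.0      # hypothetical sampling rate (Hz)
fc = 200.0       # hypothetical programmable cutoff (Hz)
# Attenuation at the Nyquist frequency, where aliasing would fold back
att_nyquist = first_order_attenuation_db(fs / 2, fc)
```

    A single pole gives only about −14 dB at fs/2 for these numbers, which is why practical anti-aliasing stages use higher-order filters (or oversampling) when stronger stop-band rejection is needed.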

  18. Luma-chroma space filter design for subpixel-based monochrome image downsampling.

    PubMed

    Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng

    2013-10-01

    In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.

  19. Striping artifact reduction in lunar orbiter mosaic images

    USGS Publications Warehouse

    Mlsna, P.A.; Becker, T.

    2006-01-01

    Photographic images of the moon from the 1960s Lunar Orbiter missions are being processed into maps for visual use. The analog nature of the images has produced numerous artifacts, the chief of which causes a vertical striping pattern in mosaic images formed from a series of filmstrips. Previous methods of stripe removal tended to introduce ringing and aliasing problems in the image data. This paper describes a recently developed alternative approach that succeeds at greatly reducing the striping artifacts while avoiding the creation of ringing and aliasing artifacts. The algorithm uses a one-dimensional frequency-domain step to deal with the periodic component of the striping artifact and a spatial-domain step to handle the aperiodic residue. Several variations of the algorithm have been explored. Results, strengths, and remaining challenges are presented. © 2006 IEEE.
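    A much simpler spatial-domain destriping idea (not the authors' frequency-domain algorithm) removes a stripe pattern that is constant down each column by subtracting the per-column mean and restoring the global mean. The image values below are synthetic.

```python
def destripe_columns(img):
    """Remove column-constant striping: subtract each column's mean,
    then add back the global mean to preserve overall brightness."""
    rows, cols = len(img), len(img[0])
    col_mean = [sum(img[r][c] for r in range(rows)) / rows
                for c in range(cols)]
    global_mean = sum(col_mean) / cols
    return [[img[r][c] - col_mean[c] + global_mean for c in range(cols)]
            for r in range(rows)]

# Flat scene (value 10) with a vertical stripe of +4 in column 1
striped = [[10.0, 14.0, 10.0],
           [10.0, 14.0, 10.0],
           [10.0, 14.0, 10.0]]
clean = destripe_columns(striped)
```

    This crude approach also flattens real large-scale scene structure along columns, which is exactly the kind of side effect the paper's combined frequency-domain/spatial-domain method is designed to avoid.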

  20. Towards real-time thermometry using simultaneous multislice MRI

    NASA Astrophysics Data System (ADS)

    Borman, P. T. S.; Bos, C.; de Boorder, T.; Raaymakers, B. W.; Moonen, C. T. W.; Crijns, S. P. M.

    2016-09-01

    MR-guided thermal therapies, such as high-intensity focused ultrasound (MRgHIFU) and laser-induced thermal therapy (MRgLITT) are increasingly being applied in oncology and neurology. MRI is used for guidance since it can measure temperature noninvasively based on the proton resonance frequency shift (PRFS). For therapy guidance using PRFS thermometry, high temporal resolution and large spatial coverage are desirable. We propose to use the parallel imaging technique simultaneous multislice (SMS) in combination with controlled aliasing (CAIPIRINHA) to accelerate the acquisition. We compare this with the sensitivity encoding (SENSE) acceleration technique. Two experiments were performed to validate that SMS can be used to increase the spatial coverage or the temporal resolution. The first was performed in agar gel using LITT heating and a gradient-echo sequence with echo-planar imaging (EPI), and the second was performed in bovine muscle using HIFU heating and a gradient-echo sequence without EPI. In both experiments, temperature curves from an unaccelerated scan and from SMS-, SENSE-, and SENSE/SMS-accelerated scans were compared. The precision was quantified by a standard deviation analysis of scans without heating. Both experiments showed a good agreement between the temperature curves obtained from the unaccelerated and SMS-accelerated scans, confirming that accuracy was maintained during SMS acceleration. The standard deviations of the temperature measurements obtained with SMS were significantly smaller than when SENSE was used, implying that SMS allows for higher acceleration. In the LITT and HIFU experiments SMS factors up to 4 and 3 were reached, respectively, with a loss of precision of less than a factor of 3. Based on these results we conclude that SMS acceleration of PRFS thermometry is a valuable addition to SENSE, because it allows for a higher temporal resolution or larger spatial coverage, with a higher precision.
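    The PRFS method referenced above maps a phase difference between gradient-echo images to a temperature change via ΔT = Δφ / (2π·γ·α·B0·TE), where α ≈ −0.01 ppm/°C is the PRF thermal coefficient of water. The field strength, echo time, and phase value below are assumed for illustration and are not taken from the paper.

```python
import math

GAMMA = 42.577e6    # gyromagnetic ratio of 1H (Hz/T)
ALPHA = -0.01e-6    # PRFS thermal coefficient (per deg C), approximate
B0 = 3.0            # assumed field strength (T)
TE = 0.020          # assumed echo time (s)

def prfs_delta_t(delta_phi):
    """Temperature change (deg C) from an inter-image phase
    difference (radians), using the PRFS relation."""
    return delta_phi / (2 * math.pi * GAMMA * ALPHA * B0 * TE)

# Because alpha is negative, heating produces a negative phase shift
dT = prfs_delta_t(-0.16)
```

    At 3 T with a 20 ms echo time, each degree of heating corresponds to roughly 0.16 rad of phase, which sets the precision requirements the paper's SMS acceleration has to preserve.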

  1. Effects of hemisphere speech dominance and seizure focus on patterns of behavioral response errors for three types of stimuli.

    PubMed

    Rausch, R; MacDonald, K

    1997-03-01

    We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.

  2. Rapid whole-brain resting-state fMRI at 3 T: Efficiency-optimized three-dimensional EPI versus repetition time-matched simultaneous-multi-slice EPI.

    PubMed

    Stirnberg, Rüdiger; Huijbers, Willem; Brenner, Daniel; Poser, Benedikt A; Breteler, Monique; Stöcker, Tony

    2017-12-01

    State-of-the-art simultaneous-multi-slice (SMS-)EPI and 3D-EPI share several properties that benefit functional MRI acquisition. Both sequences employ equivalent parallel imaging undersampling with controlled aliasing to achieve high temporal sampling rates. As a volumetric imaging sequence, 3D-EPI offers additional means of acceleration complementary to 2D-CAIPIRINHA sampling, such as fast water excitation and elliptical sampling. We performed an application-oriented comparison between a tailored, six-fold CAIPIRINHA-accelerated 3D-EPI protocol at 530 ms temporal and 2.4 mm isotropic spatial resolution and an SMS-EPI protocol with identical spatial and temporal resolution for whole-brain resting-state fMRI at 3 T. The latter required eight-fold slice acceleration to compensate for the lack of elliptical sampling and fast water excitation. Both sequences used vendor-supplied on-line image reconstruction. We acquired test/retest resting-state fMRI scans in ten volunteers, with simultaneous acquisition of cardiac and respiration data, subsequently used for optional physiological noise removal (nuisance regression). We found that the 3D-EPI protocol has significantly increased temporal signal-to-noise ratio throughout the brain as compared to the SMS-EPI protocol, especially when employing motion and nuisance regression. Both sequence types reliably identified known functional networks with stronger functional connectivity values for the 3D-EPI protocol. We conclude that the more time-efficient 3D-EPI primarily benefits from reduced parallel imaging noise due to a higher, actual k-space sampling density compared to SMS-EPI. The resultant BOLD sensitivity increase makes 3D-EPI a valuable alternative to SMS-EPI for whole-brain fMRI at 3 T, with voxel sizes well below 3 mm isotropic and sampling rates high enough to separate dominant cardiac signals from BOLD signals in the frequency domain. Copyright © 2017 Elsevier Inc. All rights reserved.
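    The comparison metric in the abstract above, temporal signal-to-noise ratio, is simply a voxel's temporal mean divided by its temporal standard deviation. A toy computation with invented values:

```python
import math

def tsnr(ts):
    """Temporal SNR of one voxel: mean over time / std over time."""
    m = sum(ts) / len(ts)
    var = sum((x - m) ** 2 for x in ts) / len(ts)
    return m / math.sqrt(var)

# Hypothetical voxel time series (arbitrary signal units)
voxel = [100.0, 102.0, 98.0, 101.0, 99.0]
snr = tsnr(voxel)
```

    In practice tSNR is computed per voxel after motion (and optionally nuisance) regression, which is why the abstract reports its gains with and without those steps.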

  3. Intercepting moving targets: does memory from practice in a specific condition of target displacement affect movement timing?

    PubMed

    de Azevedo Neto, Raymundo Machado; Teixeira, Luis Augusto

    2011-05-01

    This investigation aimed at assessing the extent to which memory from practice in a specific condition of target displacement modulates temporal errors and movement timing of interceptive movements. We compared two groups practicing with certainty of future target velocity either in unchanged target velocity or in target velocity decrease. Following practice, both experimental groups were probed in the situations of unchanged target velocity and target velocity decrease either under the context of certainty or uncertainty about target velocity. Results from practice showed similar improvement of temporal accuracy between groups, revealing that target velocity decrease did not disturb temporal movement organization when fully predictable. Analysis of temporal errors in the probing trials indicated that both groups had higher timing accuracy in velocity decrease in comparison with unchanged velocity. Effect of practice was detected by increased temporal accuracy of the velocity decrease group in situations of decreased velocity; a trend consistent with the expected effect of practice was observed for temporal errors in the unchanged velocity group and in movement initiation at a descriptive level. An additional point of theoretical interest was the fast adaptation in both groups to a target velocity pattern different from that practiced. These points are discussed under the perspective of integration of vision and motor control by means of an internal forward model of external motion.

  4. Event-related potentials in response to violations of content and temporal event knowledge.

    PubMed

    Drummer, Janna; van der Meer, Elke; Schaadt, Gesa

    2016-01-08

    Scripts that store knowledge of everyday events are fundamentally important for managing daily routines. Content event knowledge (i.e., knowledge about which events belong to a script) and temporal event knowledge (i.e., knowledge about the chronological order of events in a script) constitute qualitatively different forms of knowledge. However, there is limited information about each distinct process and the time course involved in accessing content and temporal event knowledge. Therefore, we analyzed event-related potentials (ERPs) in response to either correctly presented event sequences or event sequences that contained a content or temporal error. We found an N400, which was followed by a posteriorly distributed P600 in response to content errors in event sequences. By contrast, we did not find an N400 but an anteriorly distributed P600 in response to temporal errors in event sequences. Thus, the N400 seems to be elicited as a response to a general mismatch between an event and the established event model. We assume that the expectancy violation of content event knowledge, as indicated by the N400, induces the collapse of the established event model, a process indicated by the posterior P600. The expectancy violation of temporal event knowledge is assumed to induce an attempt to reorganize the event model in working memory, a process indicated by the frontal P600. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Spatio-temporal modeling and optimization of a deformable-grating compressor for short high-energy laser pulses

    DOE PAGES

    Qiao, Jie; Papa, J.; Liu, X.

    2015-09-24

    Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.

  6. Animal movement constraints improve resource selection inference in the presence of telemetry error

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.

    2016-01-01

    Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.

  7. [Investigating phonological planning processes in speech production through a speech-error induction technique].

    PubMed

    Nakayama, Masataka; Saito, Satoru

    2015-08-01

    The present study investigated principles of phonological planning, a common serial ordering mechanism for speech production and phonological short-term memory. Nakayama and Saito (2014) have investigated the principles by using a speech-error induction technique, in which participants were exposed to an auditory distractor word immediately before an utterance of a target word. They demonstrated within-word adjacent mora exchanges and serial position effects on error rates. These findings support, respectively, the temporal distance and the edge principles at a within-word level. As this previous study induced errors using word distractors created by exchanging adjacent morae in the target words, it is possible that the speech errors are expressions of lexical intrusions reflecting interactive activation of phonological and lexical/semantic representations. To eliminate this possibility, the present study used nonword distractors that had no lexical or semantic representations. This approach successfully replicated the error patterns identified in the abovementioned study, further confirming that the temporal distance and edge principles are organizing precepts in phonological planning.

  8. High-Resolution Gravity and Time-Varying Gravity Field Recovery using GRACE and CHAMP

    NASA Technical Reports Server (NTRS)

    Shum, C. K.

    2002-01-01

    This progress report summarizes the research work conducted under NASA's Solid Earth and Natural Hazards Program 1998 (SENH98) entitled High Resolution Gravity and Time Varying Gravity Field Recovery Using GRACE (Gravity Recovery and Climate Experiment) and CHAMP (Challenging Mini-satellite Package for Geophysical Research and Applications), which included a no-cost extension time period. The investigation conducted pilot studies using simulated GRACE and CHAMP data and other in situ and space geodetic observables, satellite altimeter data, and ocean mass variation data to study the dynamic processes of the Earth which affect climate change. Results from this investigation include: (1) a new method to use the energy approach for expressing gravity mission data as in situ measurements, with the possibility of enhancing the spatial resolution of the gravity signal; (2) a test of the method using CHAMP, validated through the development of a mean gravity field model from CHAMP data; (3) elaborate simulations to quantify errors of tides and atmosphere and to recover hydrological and oceanic signals using GRACE, which show a significant aliasing effect, with errors amplified in the GRACE resonant geopotential that are not trivial to remove; and (4) quantification of oceanic and ice sheet mass changes in a geophysical constraint study to assess their contributions to global sea level change; while the results improved significantly over previous studies that used only the SLR (Satellite Laser Ranging)-determined zonal gravity change data, the constraint could be further improved with additional information on mantle rheology, PGR (Post-Glacial Rebound), and ice loading history. A list of relevant presentations and publications is attached, along with a summary of the SENH investigation generated in 2000.

  9. A comparison of earthquake backprojection imaging methods for dense local arrays

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. 
For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
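The kurtosis pre-processing recommended above can be sketched as a sliding-window characteristic function: kurtosis emphasizes impulsive arrivals regardless of waveform polarity, which is why kurtosis waveforms stack well in backprojection. A minimal sketch; the window length and the synthetic impulse-in-noise data are illustrative assumptions, not values from the study:

```python
import numpy as np

def kurtosis_cf(x, win):
    """Sliding-window excess kurtosis of a waveform.

    High values flag impulsive onsets; the measure is insensitive to
    polarity (only even moments are used), matching the robustness
    described for kurtosis-based backprojection.
    """
    out = np.zeros(len(x))
    for i in range(win, len(x) + 1):
        w = x[i - win:i]
        m = w.mean()
        s2 = ((w - m) ** 2).mean()
        if s2 > 0:
            out[i - 1] = ((w - m) ** 4).mean() / s2 ** 2 - 3.0
    return out

# Synthetic trace: an impulse buried in Gaussian noise near sample 600
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
x[600] += 8.0
cf = kurtosis_cf(x, win=100)
print(int(np.argmax(cf)))  # peaks in a window containing the impulse
```

Because kurtosis discards polarity and absolute amplitude, the transform trades magnitude preservation for detection robustness, consistent with the trade-off reported above.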

  10. Interleaved Spiral-In/Out with Application to fMRI

    PubMed Central

    Law, Christine S.; Glover, Gary H.

    2009-01-01

The conventional spiral-in/out trajectory samples k-space fully in the spiral-in path and again in the spiral-out path, enabling creation of separate images. We propose an interleaved spiral-in/out trajectory comprising a spiral-in path that gathers half of the k-space data, and a complementary spiral-out path that gathers the other half. The readout duration is thereby reduced by approximately half, offering two distinct advantages: reduction of signal dropout due to susceptibility-induced field gradients (at the expense of signal-to-noise ratio), and the ability to achieve higher spatial resolution when the readout duration is identical to the conventional method. Two reconstruction methods are described; both involve temporal filtering to remove aliasing artifacts. Empirically, interleaved spiral-in/out images are free from false activation resulting from signal pileup around the air/tissue interface, which is common in the conventional spiral-out method. Comparisons with conventional methods using a hyperoxia stimulus reveal greater frontal-orbital activation volumes but a slight reduction of overall activation in other brain regions. PMID:19449373

  11. An ISEE 3 high time resolution study of interplanetary parameter correlations with magnetospheric activity

    NASA Technical Reports Server (NTRS)

    Baker, D. N.; Zwickl, R. D.; Bame, S. J.; Hones, E. W., Jr.; Tsurutani, B. T.; Smith, E. J.; Akasofu, S.-I.

    1983-01-01

The coupling between the solar wind and the geomagnetic disturbances was examined using data from the ISEE-3 spacecraft at an earth-sun libration point and ground-based data. One-minute data were used to avoid aliasing in determining the internal magnetospheric response to solar wind conditions. Attention was given to the cross-correlations between the geomagnetic index (AE), the total energy dissipation rate (UT), and the solar wind parameters, as well as the spatial and temporal scales on which the magnetosphere reacts to the solar wind conditions. It was considered necessary to characterize the physics of the solar wind-magnetosphere coupling in order to define the requirements for a spacecraft like the ISEE-3 that could be used as a real time monitoring system for predicting storms and substorms. The correlations among all but one parameter were lower during disturbance intervals; UT was highly correlated with all parameters during the disturbed times. An intrinsic 25-40 min delay was detected between interplanetary activity and magnetospheric response in quiet times, diminishing to no more than 15 min during disturbed times.
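The delay estimate described above amounts to finding the lag that maximizes the cross-correlation between an interplanetary driver series and a geomagnetic response series. A minimal sketch with synthetic one-minute data; the 30-sample delay, red-noise driver, and noise level are invented for illustration and are not from the study:

```python
import numpy as np

def best_lag(driver, response, max_lag):
    """Lag (in samples) at which the normalized cross-correlation
    between driver and response peaks, and the peak value."""
    d = (driver - driver.mean()) / driver.std()
    r = (response - response.mean()) / response.std()
    cc = [np.mean(d[:len(d) - k] * r[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(cc)), max(cc)

# Synthetic 1-min series: response = driver delayed by 30 samples + noise
rng = np.random.default_rng(1)
driver = np.cumsum(rng.standard_normal(2000))     # red-noise "solar wind" proxy
response = np.roll(driver, 30) + 0.1 * rng.standard_normal(2000)
lag, peak = best_lag(driver, response, max_lag=60)
print(lag)
```

With one-minute sampling, a recovered lag of 30 samples corresponds to a 30 min driver-to-response delay, the same order as the 25-40 min quiet-time delay reported above.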

  12. Combining BRITE and ground-based photometry for the β Cephei star ν Eridani: impact on photometric pulsation mode identification and detection of several g modes

    NASA Astrophysics Data System (ADS)

    Handler, G.; Rybicka, M.; Popowicz, A.; Pigulski, A.; Kuschnig, R.; Zocłońska, E.; Moffat, A. F. J.; Weiss, W. W.; Grant, C. C.; Pablo, H.; Whittaker, G. N.; Ruciński, S. M.; Ramiaramanantsoa, T.; Zwintz, K.; Wade, G. A.

    2017-01-01

We report a simultaneous ground- and space-based photometric study of the β Cephei star ν Eridani. Half a year of observations have been obtained by four of the five satellites constituting BRITE-Constellation, supplemented with ground-based photoelectric photometry. We show that carefully combining the two data sets virtually eliminates the aliasing problem that often hampers time series analyses. We detect 40 periodic signals intrinsic to the star in the light curves. Despite a lower detection limit, we do not recover all the pressure and mixed modes previously reported in the literature, but we newly detect six additional gravity modes. This behaviour is a consequence of temporal changes in the pulsation amplitudes that we also detected for some of the p modes. We point out that the theoretically predicted dependence of pulsation amplitude on wavelength in visual passbands is steeper than the observationally measured dependence, to the extent that three dominant pulsation modes of ν Eridani would be incorrectly identified using data in optical filters only. We discuss possible reasons for this discrepancy.

  13. Point target detection utilizing super-resolution strategy for infrared scanning oversampling system

    NASA Astrophysics Data System (ADS)

    Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei

    2017-11-01

To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information and thereby aiding target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are shuffled into a whole oversampled image for post-processing, but aliasing between neighboring pixels degrades the image, with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to preserve and aggregate target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.

  14. Digital Paper Technologies for Topographical Applications

    DTIC Science & Technology

    2011-09-19

The measures examined were training time for each method, time for entry of features, procedural errors, handwriting recognition errors, and user preference. For these metrics, temporal association was ... checkbox, text restricted to a specific list of values, etc.) that provides constraints to the handwriting recognizer. When the user fills out the form ...

  15. Evaluation of the Leap Motion Controller during the performance of visually-guided upper limb movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James

    2018-01-01

Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is expensive and not easily portable. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. 
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2-5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings.
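The Bland-Altman analysis used above reduces to a mean bias and 95% limits of agreement computed on paired differences between the two systems. A minimal sketch; the movement-time numbers are hypothetical, not data from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement
    systems (e.g. a low-cost tracker vs. a criterion system)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)   # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

# Hypothetical movement times (ms): criterion system vs. markerless tracker
optotrak = [410, 520, 480, 390, 610, 450, 500, 430]
lmc = [455, 560, 505, 445, 640, 500, 545, 465]
bias, lo, hi = bland_altman(lmc, optotrak)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

A systematic bias of tens of milliseconds with limits of agreement of similar width, as in this toy example, mirrors the 40±44 ms movement-time error reported above.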

  16. Evaluation of the Leap Motion Controller during the performance of visually-guided upper limb movements

    PubMed Central

    Gonzalez, David; Nouredanesh, Mina; Tung, James

    2018-01-01

Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is expensive and not easily portable. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts’ type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. 
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2–5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings. PMID:29529064

  17. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation

    PubMed Central

    Balachandran, Ramya; Labadie, Robert F.

    2015-01-01

    Purpose A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. Methods An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. Results The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. Conclusion The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure. PMID:26183149

  18. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation.

    PubMed

    Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F

    2016-03-01

A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.

  19. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above mentioned degradations and presented subjective analysis about their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.

  20. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low-vision image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of image of natural objects, number-theoretic coding for iconic systems.

  1. Mapping nonlinear shallow-water tides: a look at the past and future

    NASA Astrophysics Data System (ADS)

    Andersen, Ole B.; Egbert, Gary D.; Erofeeva, Svetlana Y.; Ray, Richard D.

    2006-12-01

    Overtides and compound tides are generated by nonlinear mechanisms operative primarily in shallow waters. Their presence complicates tidal analysis owing to the multitude of new constituents and their possible frequency overlap with astronomical tides. The science of nonlinear tides was greatly advanced by the pioneering researches of Christian Le Provost who employed analytical theory, physical modeling, and numerical modeling in many extensive studies, especially of the tides of the English Channel. Le Provost’s complementary work with satellite altimetry motivates our attempts to merge these two interests. After a brief review, we describe initial steps toward the assimilation of altimetry into models of nonlinear tides via generalized inverse methods. A series of barotropic inverse solutions is computed for the M_4 tide over the northwest European Shelf. Future applications of altimetry to regions with fewer in situ measurements will require improved understanding of error covariance models because these control the tradeoffs between fitting hydrodynamics and data, a delicate issue in coastal regions. While M_4 can now be robustly determined along the Topex/Poseidon satellite ground tracks, many other compound tides face serious aliasing problems.

  2. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

Classical Adaptive Optics suffers from a limited corrected Field Of View. This drawback has led to the development of Multiconjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is however a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems such as model errors, aliasing effect reduction, and experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up are then discussed.

  3. Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.

    PubMed

    Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien

    2018-05-01

RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) allows a bandpass signal to be sampled at a rate lower than twice its maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist samplings were compared in vitro and in vivo. The estimation errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced amount of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
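Bandpass sampling works because a narrowband signal at carrier fc folds to a predictable alias frequency |fc - k*fs| when sampled at fs below 2*fc, so no information is lost as long as the signal bandwidth fits in the folded band. A minimal single-tone sketch; the 5 MHz carrier and 4 MHz sampling rate are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

fc = 5e6        # "RF" carrier (Hz); 4*fc = 20 MHz would be the standard rate
fs_bp = 4e6     # sub-Nyquist rate: fs < 2*fc, but above twice the bandwidth
n = np.arange(1024)

# Undersampled RF tone
x_bp = np.cos(2 * np.pi * fc * n / fs_bp)

# The tone folds to |fc - k*fs| for the integer k that minimizes it
k = round(fc / fs_bp)
f_alias = abs(fc - k * fs_bp)   # 1 MHz here

# Verify: the spectral peak of the undersampled data sits at f_alias
spec = np.abs(np.fft.rfft(x_bp * np.hanning(len(n))))
f_axis = np.fft.rfftfreq(len(n), d=1 / fs_bp)
print(f_axis[np.argmax(spec)] / 1e6, f_alias / 1e6)
```

Because the fold location is known, the baseband Doppler processing can proceed on the undersampled data, which is the storage/workflow advantage described above.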

  4. A review of ultra-short pulse lasers for military remote sensing and rangefinding

    NASA Astrophysics Data System (ADS)

    Lamb, Robert A.

    2009-09-01

Advances in ultra-short pulse laser technology have resulted in commercially available laser systems capable of generating high peak powers >1 GW in tabletop systems. This opens the prospect of generating very wide spectral emissions with a combination of non-linear optical effects in photonic crystal fibres to produce supercontinua in systems that are readily accessible to military applications. However, military remote sensing rarely requires bandwidths spanning two octaves and it is clear that efficient systems require controlled spectral emission in relevant bands. Furthermore, the limited spectral responsivity of focal plane arrays may impose further restriction on the usable spectrum. A recent innovation which temporally encodes a spectrum using group velocity dispersion allows detection with a photodiode, opening the prospect for high speed hyperspectral sensing and imaging. At the opposite end of the power spectrum, ultra-low power remote sensing using time-correlated single photon counting (SPC) has reduced the laser power requirement and demonstrated remote sensing over 5 km during daylight with repetition rates of ~10 MHz with ps pulses. Recent research has addressed uncorrelated SPC and waveform transmission to increase data rates for absolute rangefinding whilst avoiding range aliasing. This achievement opens the prospect of combining SPC with high repetition rate temporal encoding of supercontinua to realise practical hyperspectral remote sensing lidar. The talk will present an overview of these technologies and present a concept which combines them into a single system for high-speed hyperspectral imaging and remote sensing.

  5. Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits

    NASA Astrophysics Data System (ADS)

    Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.

    2017-08-01

    Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
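The temporal-masking effect reported above can be approximated with a simple derating factor: assuming upsets arrive uniformly in time, a flip-flop upset is masked whenever it strikes within the downstream logic delay of the next capture edge, because the corrupted value cannot propagate through the logic in time. The clock period, delays, and raw rate below are illustrative assumptions, not numbers from the paper:

```python
def temporal_derating(t_clk_ns, t_logic_ns):
    """Fraction of uniformly-timed FF upsets that survive temporal
    masking when t_logic_ns of combinational delay follows the FF."""
    if not 0 <= t_logic_ns <= t_clk_ns:
        raise ValueError("logic delay must fit within the clock period")
    return (t_clk_ns - t_logic_ns) / t_clk_ns

raw_ser = 1e-6  # illustrative raw upset rate per device-cycle
for t_logic in (1.0, 4.0, 8.0):
    factor = temporal_derating(t_clk_ns=10.0, t_logic_ns=t_logic)
    print(t_logic, factor, raw_ser * factor)
```

Under this toy model, increasing the logic delay between flip-flops monotonically lowers the effective error rate, matching the trend observed experimentally; real derating also depends on voltage, LET, and FF design, as the abstract notes.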

  6. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  7. Application of digital image processing techniques to astronomical imagery 1980

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1981-01-01

    Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.

  8. Reducing representativeness and sampling errors in radio occultation-radiosonde comparisons

    NASA Astrophysics Data System (ADS)

    Gilpin, Shay; Rieckh, Therese; Anthes, Richard

    2018-05-01

Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary and vertical representativeness errors may also affect the comparison. In RO-RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e. within 3-6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO-RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles and applying a spatial-temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle (~666 km radius), small circle (~300 km radius), and ellipse parallel to the wind direction (~666 km semi-major axis, ~133 km semi-minor axis). We also apply a spatial-temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle and produces differences that are comparable to or less than the RMS differences within circles of similar area. 
Applying the sampling correction shows the most significant reduction in RMS differences, such that RMS differences are nearly identical to the sampling correction regardless of the geometric constraints. We conclude that implementing the spatial-temporal sampling correction using a reliable model will most effectively reduce sampling errors during RO-RS comparisons; however, if a reliable model is not available, restricting spatial comparisons to within an ellipse parallel to the wind flow will reduce sampling errors caused by horizontal atmospheric variability.
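The ellipse-based co-location constraint reduces to a point-in-ellipse test in a wind-aligned frame. A minimal flat-Earth sketch with east/north offsets in km (a real implementation would use great-circle geometry); the axis lengths follow the ~666 km by ~133 km ellipse described above, and the wind-angle convention (degrees counterclockwise from east) is an assumption for illustration:

```python
import numpy as np

def in_wind_ellipse(dx_km, dy_km, wind_dir_deg, a_km=666.0, b_km=133.0):
    """True if an RO-RS offset (east, north, in km) lies inside an
    ellipse whose semi-major axis is aligned with the wind direction."""
    th = np.deg2rad(wind_dir_deg)
    # Rotate the offset into the wind-aligned frame
    u = dx_km * np.cos(th) + dy_km * np.sin(th)    # along-wind component
    v = -dx_km * np.sin(th) + dy_km * np.cos(th)   # cross-wind component
    return (u / a_km) ** 2 + (v / b_km) ** 2 <= 1.0

# With wind blowing eastward: 600 km downwind passes, 600 km crosswind fails
print(bool(in_wind_ellipse(600.0, 0.0, wind_dir_deg=0.0)))
print(bool(in_wind_ellipse(0.0, 600.0, wind_dir_deg=0.0)))
```

The design rationale is that atmospheric fields decorrelate more slowly along the flow than across it, so an along-wind ellipse keeps pairs that are still representative while rejecting crosswind pairs at the same distance.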

  9. Prediction error and trace dominance determine the fate of fear memories after post-training manipulations

    PubMed Central

    Alfei, Joaquín M.; Ferrer Monti, Roque I.; Molina, Victor A.; Bueno, Adrián M.

    2015-01-01

Different mnemonic outcomes have been observed when associative memories are reactivated by CS exposure and followed by amnestics. These outcomes include mere retrieval, destabilization–reconsolidation, a transitional period (which is insensitive to amnestics), and extinction learning. However, little is known about the interaction between initial learning conditions and these outcomes during a reinforced or nonreinforced reactivation. Here we systematically combined temporally specific memories with different reactivation parameters to observe whether these four outcomes are determined by the conditions established during training. First, we validated two training regimens with different temporal expectations about US arrival. Then, using Midazolam (MDZ) as an amnestic agent, fear memories in both learning conditions were submitted to retraining either under identical or different parameters to the original training. Destabilization (i.e., susceptibility to MDZ) occurred when reactivation was reinforced, provided that a temporal prediction error about US arrival occurred. In subsequent experiments, both treatments were systematically reactivated by nonreinforced context exposure of different lengths, which allowed us to explore the interaction between training and reactivation lengths. These results suggest that temporal prediction error and trace dominance determine the extent to which reactivation produces the different outcomes. PMID:26179232

  10. High temporal resolution aberrometry in a 50-eye population and implications for adaptive optics error budget.

    PubMed

    Jarosz, Jessica; Mecê, Pedro; Conan, Jean-Marc; Petit, Cyril; Paques, Michel; Meimon, Serge

    2017-04-01

We formed a database gathering the wavefront aberrations of 50 healthy eyes measured with an original custom-built Shack-Hartmann aberrometer at a temporal frequency of 236 Hz, with 22 lenslets across a 7-mm diameter pupil, for a duration of 20 s. With this database, we draw statistics on the spatial and temporal behavior of the dynamic aberrations of the eye. Dynamic aberrations were studied on a 5-mm diameter pupil and on a 3.4 s sequence between blinks. We noted that, on average, temporal wavefront variance exhibits an n^-2 power law with radial order n, and temporal spectra follow an f^-1.5 power law with temporal frequency f. From these statistics, we then extract guidelines for designing an adaptive optics system. For instance, we show the residual wavefront error evolution as a function of the number of corrected modes and of the adaptive optics loop frame rate. In particular, we infer that adaptive optics performance rapidly increases with the loop frequency up to 50 Hz, with gain being more limited at higher rates.
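
The reported f^-1.5 temporal power law can be checked by a least-squares fit in log-log space. A minimal sketch on a synthetic, noise-free spectrum; the `loglog_slope` helper is illustrative, not the authors' code:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

freqs = [0.5 * 1.25 ** k for k in range(30)]   # log-spaced temporal frequency bins, Hz
psd = [f ** -1.5 for f in freqs]               # ideal noise-free f^-1.5 spectrum
slope = loglog_slope(freqs, psd)
print(round(slope, 3))   # -1.5: the power-law exponent is the log-log slope
```

With measured spectra, the same fit would be applied to binned periodogram estimates rather than an ideal curve.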

  12. Unavoidable Errors: A Spatio-Temporal Analysis of Time-Course and Neural Sources of Evoked Potentials Associated with Error Processing in a Speeded Task

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2008-01-01

    The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…

  13. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
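
The two-frequency unwrapping step described above can be sketched as follows: the coarse low-frequency phase predicts the unwrapped value, which selects the integer fringe order for the wrapped high-frequency phase. Numbers and the `unwrap_two_freq` helper are illustrative, not the authors' implementation:

```python
import math

TWO_PI = 2 * math.pi

def unwrap_two_freq(phi_lo, phi_hi_wrapped, ratio):
    """Unwrap the high-frequency phase using the coarse low-frequency estimate.
    ratio = f_hi / f_lo: high-frequency fringes per nonambiguous fringe."""
    estimate = ratio * phi_lo                         # coarse prediction of the unwrapped phase
    k = round((estimate - phi_hi_wrapped) / TWO_PI)   # integer fringe order
    return phi_hi_wrapped + TWO_PI * k

phi_true = 34.5                    # true (unwrapped) high-frequency phase at one pixel
ratio = 8.0                        # second frequency is 8x the nonambiguous one
phi_lo = phi_true / ratio + 0.01   # low-frequency phase with a little temporal noise
phi_rec = unwrap_two_freq(phi_lo, phi_true % TWO_PI, ratio)
print(abs(phi_rec - phi_true) < 1e-9)   # True: correct fringe order recovered
```

Note that the noise on the coarse phase is amplified by `ratio` inside the rounding step, which is the trade-off the abstract describes: too high a second frequency makes temporal noise trigger ambiguous unwrapping.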

  14. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  15. Improving temporal resolution in fMRI using a 3D spiral acquisition and low rank plus sparse (L+S) reconstruction.

    PubMed

    Petrov, Andrii Y; Herbst, Michael; Andrew Stenger, V

    2017-08-15

Rapid whole-brain dynamic Magnetic Resonance Imaging (MRI) is of particular interest in Blood Oxygen Level Dependent (BOLD) functional MRI (fMRI). Faster acquisitions with higher temporal sampling of the BOLD time-course provide several advantages, including increased sensitivity in detecting functional activation, the possibility of filtering out physiological noise for improving temporal SNR, and freezing out head motion. Generally, faster acquisitions require undersampling of the data, which results in aliasing artifacts in the object domain. A recently developed low-rank (L) plus sparse (S) matrix decomposition model (L+S) is one of the methods that has been introduced to reconstruct images from undersampled dynamic MRI data. The L+S approach assumes that the dynamic MRI data, represented as a space-time matrix M, is a linear superposition of L and S components, where L represents highly spatially and temporally correlated elements, such as the image background, while S captures dynamic information that is sparse in an appropriate transform domain. This suggests that L+S might be suited for undersampled task or slow event-related fMRI acquisitions, because the periodic nature of the BOLD signal is sparse in the temporal Fourier transform domain and slowly varying brain background signals, such as physiological noise and drift, will be predominantly low-rank. In this work, as a proof of concept, we exploit the L+S method for accelerating block-design fMRI using a 3D stack of spirals (SoS) acquisition where undersampling is performed in the k_z-t domain. We examined the feasibility of the L+S method to accurately separate temporally correlated brain background information in the L component while capturing periodic BOLD signals in the S component. We present results acquired in control human volunteers at 3T for both retrospectively and prospectively acquired fMRI data for a visual activation block-design task. We show that a SoS fMRI acquisition with an acceleration of four and L+S reconstruction can achieve a brain coverage of 40 slices at 2 mm isotropic resolution and a 64 × 64 matrix size every 500 ms. Copyright © 2017 Elsevier Inc. All rights reserved.
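
The L+S idea can be illustrated with a toy alternating proximal scheme: singular-value thresholding for the low-rank part and soft thresholding in the temporal Fourier domain for the sparse part. This is a simplified sketch on fully synthetic data (thresholds and sizes are arbitrary), not the reconstruction used in the paper, which operates on undersampled k-space:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Soft thresholding of complex coefficients: proximal operator of the l1 norm."""
    mag = np.maximum(np.abs(X), 1e-12)
    return X * np.maximum(1.0 - tau / mag, 0.0)

rng = np.random.default_rng(0)
nvox, nt = 40, 64
t = np.arange(nt)
background = np.outer(rng.normal(size=nvox), np.ones(nt))   # rank-1 static background
block = 0.5 * np.sign(np.sin(2 * np.pi * t / 16))           # block-design BOLD time course
task = np.zeros((nvox, nt))
for v in range(5):                                          # 5 "active" voxels, shifted onsets
    task[v] = np.roll(block, 3 * v)
M = background + task                                       # space-time (Casorati) matrix

L = np.zeros_like(M)
S = np.zeros_like(M)
for _ in range(50):                                         # alternating proximal updates
    L = svt(M - S, tau=10.0)                                # low-rank background
    S = np.fft.ifft(soft(np.fft.fft(M - L, axis=1), tau=2.0), axis=1).real  # temporally sparse part

resid = np.linalg.norm(M - L - S) / np.linalg.norm(M)
print(round(resid, 3))   # small: L+S jointly explains the data
```

The large singular value of the static background survives the nuclear-norm threshold and lands in L, while the periodic task signal concentrates in a few temporal-frequency bins and survives the l1 threshold in S.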

  16. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, error in drift estimation, and timing errors. We find that some error sources that are usually ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey setup. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.
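
If the individual error sources are treated as independent, their 1-sigma contributions combine in quadrature to give the total budget. A minimal sketch with purely hypothetical values (the paper's actual formulae and numbers are not reproduced here):

```python
import math

# hypothetical 1-sigma contributions for one relative-gravity measurement (microGal)
sources = {
    "instrument noise":  3.0,
    "drift estimation":  4.0,
    "tilt":              2.0,
    "timing":            1.5,
    "free-air (height)": 2.5,
}

# independent error sources add in quadrature
total = math.sqrt(sum(s ** 2 for s in sources.values()))
print(round(total, 2))   # combined 1-sigma uncertainty, microGal
```

A budget like this makes the paper's point visible: an "ignored" term of a few microGal (e.g. drift) can dominate the combined uncertainty.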

  17. Quantification of errors induced by temporal resolution on Lagrangian particles in an eddy-resolving model

    NASA Astrophysics Data System (ADS)

    Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander

    2014-04-01

Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales, and connectivity, and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from the mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally be no more than a few days to capture the high-frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected using 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail. 
The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.
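
The effect of temporal averaging on trajectories can be reproduced in one dimension: advect a particle through a time-varying velocity field and through its time-mean, then compare end positions. Everything here is synthetic (a mean flow plus a sinusoidal "eddy" term), and forward Euler is used for brevity:

```python
import math

def velocity(t, x):
    """Synthetic 1-D velocity: mean flow plus a high-frequency eddy term."""
    return 0.5 + 0.4 * math.sin(2 * math.pi * t / 3.0 + x)

def velocity_avg(t, x):
    """Stand-in for a long-time-averaged field: the eddy term averages out."""
    return 0.5

def advect(vel, x0, days, dt=0.05):
    """Forward-Euler advection of a single particle."""
    x = x0
    for i in range(round(days / dt)):
        x += vel(i * dt, x) * dt
    return x

x_full = advect(velocity, 0.0, days=60)      # high-temporal-resolution field
x_avg = advect(velocity_avg, 0.0, days=60)   # temporally averaged field
print(round(abs(x_full - x_avg), 3))         # trajectory error from temporal averaging
```

Even though the eddy term has zero time-mean at fixed position, its coupling to the particle position produces a systematic drift, so the averaged-field trajectory diverges from the full one; this is the kind of error the study quantifies at ocean scale.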

  18. Directional selection in temporally replicated studies is remarkably consistent.

    PubMed

    Morrissey, Michael B; Hadfield, Jarrod D

    2012-02-01

Temporal variation in selection is a fundamental determinant of evolutionary outcomes. A recent paper presented a synthetic analysis of temporal variation in selection in natural populations. The authors concluded that there is substantial variation in the strength and direction of selection over time, but acknowledged that sampling error would result in estimates of selection that were more variable than the true values. We reanalyze their dataset using techniques that account for the necessary effect of sampling error, which inflates apparent levels of variation, and show that directional selection is remarkably constant over time, both in magnitude and direction. Thus we cannot claim that the available data support the existence of substantial temporal heterogeneity in selection. Nonetheless, we conjecture that temporal variation in selection could be important, but that there are good reasons why it may not appear in the available data. These new analyses highlight the importance of applying techniques that estimate parameters of the distribution of selection, rather than parameters of the distribution of estimated selection (which will reflect both sampling error and "real" variation in selection); indeed, despite the availability of methods for the former, focus on the latter has been common in synthetic reviews of aspects of selection in nature, and can lead to serious misinterpretations. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
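
The correction can be sketched with a simple method-of-moments estimate: subtract the expected sampling variance (the mean squared standard error) from the variance of the yearly estimates. The numbers below are hypothetical, and the authors' actual reanalysis is model-based, but the logic is the same:

```python
from statistics import mean, pvariance

# hypothetical yearly selection-gradient estimates and their standard errors
estimates = [0.21, 0.35, 0.12, 0.28, 0.19, 0.31]
std_errs  = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]

raw_var = pvariance(estimates)                 # variance of the *estimates*
sampling = mean(se ** 2 for se in std_errs)    # expected inflation from sampling error
true_var = max(raw_var - sampling, 0.0)        # method-of-moments estimate of real variation
print(round(raw_var, 4), round(true_var, 4))
```

For these illustrative numbers the corrected variance is zero: all of the apparent year-to-year variation is attributable to sampling error, which is precisely the distinction between the distribution of selection and the distribution of estimated selection.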

  19. Integrating Map Algebra and Statistical Modeling for Spatio- Temporal Analysis of Monthly Mean Daily Incident Photosynthetically Active Radiation (PAR) over a Complex Terrain.

    PubMed

    Evrendilek, Fatih

    2007-12-12

This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 × 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R²adj). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.

  20. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  1. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error.

    PubMed

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo

    2014-05-01

Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field was optimized. The optimal velocity field optimizes a novel similarity function, which we call the intensity consistency error, defined over multiple consecutive frames evolved to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests with simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.

  2. Optoelectronic image scanning with high spatial resolution and reconstruction fidelity

    NASA Astrophysics Data System (ADS)

    Craubner, Siegfried I.

    2002-02-01

In imaging systems the detector arrays deliver time-discrete signals at the output, where the spatial frequencies of the object scene are mapped into the electrical signal frequencies. Since the spatial frequency spectrum cannot be bandlimited by the front optics, the usual detector arrays perform a spatial undersampling and, as a consequence, aliasing occurs. A means to partially suppress the backfolded alias band is bandwidth limitation in the reconstruction low-pass, at the price of resolution loss. By utilizing a bilinear detector array in a pushbroom-type scanner, undersampling and aliasing can be overcome. For modeling the perception, the theory of discrete systems and multirate digital filter banks is applied, where aliasing cancellation and perfect reconstruction play an important role. The discrete transfer function of a bilinear array can be embedded into the scheme of a second-order filter bank. The detector arrays already form the analysis bank, and the overall filter bank is completed with the synthesis bank, for which stabilized inverse filters are proposed to compensate for the low-pass characteristics and to approximate perfect reconstruction. The synthesis filter branch can be realized in a so-called `direct form,' or the `polyphase form,' where the latter is an expenditure-optimal solution, which gives advantages when implemented in a signal processor. This paper attempts to introduce well-established concepts of the theory of multirate filter banks into the analysis of scanning imagers, which are applicable in a much broader sense than for the problems addressed here. To the author's knowledge this is also a novelty.
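
The aliasing-cancellation idea can be illustrated with the simplest two-channel case, a Haar filter bank: each decimated subband is individually aliased, yet the synthesis bank cancels the alias terms exactly. This is a textbook sketch of the general principle, not the paper's bilinear-array filter bank:

```python
# Two-channel Haar filter bank: downsampling by 2 aliases each subband,
# but the synthesis bank cancels the alias terms and reconstructs exactly.

def analysis(x):
    """Split into decimated low-pass and high-pass subbands (Haar)."""
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def synthesis(lo, hi):
    """Upsample and combine; alias components from the two branches cancel."""
    y = []
    for l, h in zip(lo, hi):
        y.extend([l + h, l - h])
    return y

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
lo, hi = analysis(x)
y = synthesis(lo, hi)
print(y == x)   # True: perfect reconstruction despite subband aliasing
```

Each subband alone is a half-rate, aliased signal; only the paired synthesis filters make the alias contributions cancel, which is the property the paper exploits for the bilinear scanning array.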

  3. Geometric error characterization and error budgets. [thematic mapper

    NASA Technical Reports Server (NTRS)

    Beyer, E.

    1982-01-01

Procedures used in characterizing geometric error sources for a spaceborne imaging system are described using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross track and along track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad, 90%).

  4. Environmental monitoring and assessment of landscape dynamics in southern coast of the Caspian Sea through intensity analysis and imprecise land-use data.

    PubMed

    Hasani, Mohammad; Sakieh, Yousef; Dezhkam, Sadeq; Ardakani, Tahereh; Salmanmahiny, Abdolrassoul

    2017-04-01

A hierarchical intensity analysis of land-use change is applied to evaluate the dynamics of a coupled urban coastal system in Rasht County, Iran. Temporal land-use layers of 1987, 1999, and 2011 are employed, while spatial accuracy metrics are only available for the 2011 data (overall accuracy of 94%). The errors in the 1987 and 1999 layers are unknown, which can influence the accuracy of temporal change information. Such data were employed to examine the size and the type of errors that could justify deviations from uniform change intensities. Accordingly, errors comprising 3.31 and 7.47% of the 1999 and 2011 maps, respectively, could explain all deviations from uniform gains, while errors comprising 5.21 and 1.81% of the 1987 and 1999 maps, respectively, could explain all deviations from uniform losses. Additional historical information is also applied for uncertainty assessment and to separate probable map errors from actual land-use changes. In this regard, historical processes in Rasht County can explain different types of transition that are either consistent or inconsistent with known processes. The intensity analysis assisted in the identification of systematic transitions and detection of competitive categories, which cannot be investigated through conventional change detection methods. Based on the results, built-up area is the most active gaining category in the area, and the wetland category, with less areal extent, is more sensitive to intense land-use change processes. Uncertainty assessment results also indicated that there are no considerable classification errors in the temporal land-use data and these imprecise layers can reliably provide implications for informed decision making.

  5. Effective regurgitant orifice area by the color Doppler flow convergence method for evaluating the severity of chronic aortic regurgitation. An animal study.

    PubMed

    Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J

    1996-02-01

The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities; the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low aliasing velocities overestimated reference EM orifice areas; however, at high aliasing velocities, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high aliasing velocities and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.
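
Under the hemispheric assumption used above, the flow-convergence arithmetic is: peak flow rate Q = 2*pi*r^2*Va, effective orifice area = Q / v_max, and regurgitant volume = area x VTI. A sketch with hypothetical measurements chosen to fall in the ranges reported in the abstract:

```python
import math

def eroa_pisa(r_cm, v_alias_cm_s, v_max_cm_s):
    """Flow-convergence (PISA) estimate of effective regurgitant orifice area.
    Assumes a hemispheric isovelocity surface of radius r at the aliasing velocity."""
    q_max = 2 * math.pi * r_cm ** 2 * v_alias_cm_s   # peak regurgitant flow rate, mL/s
    return q_max / v_max_cm_s                        # effective orifice area, cm^2

# hypothetical measurement: r = 0.5 cm shell at a 75 cm/s (high) aliasing velocity,
# continuous-wave peak velocity 400 cm/s, velocity time integral 120 cm
area = eroa_pisa(0.5, 75.0, 400.0)
rvol = area * 120.0                  # regurgitant volume per beat, mL
print(round(area, 2), round(rvol, 1))
```

The choice of a high aliasing velocity matters because it keeps the isovelocity shell close to the orifice, where the hemispheric assumption holds best, which is the behavior the validation above reports.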

  6. T-phase and tsunami signals recorded by IMS hydrophone triplets during the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Matsumoto, H.; Haralabus, G.; Zampolli, M.; Ozel, N. M.; Yamada, T.; Mark, P. K.

    2016-12-01

A hydrophone station of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) is used to estimate the back-azimuth of T-phase signals generated by the 2011 Tohoku earthquake. Among the 6 IMS hydrophone stations required by the Treaty, 5 consist of two triplets, with the exception of HA1 (Australia), which has only one. The hydrophones of each triplet are suspended in the SOFAR channel and arranged to form an equilateral triangle with each side approximately two kilometers long. The waveforms from the Tohoku earthquake were received at HA11 on Wake Island, approximately 3100 km south-east of the earthquake epicenter. The frequency range used in the array analysis was chosen to be less than 0.375 Hz, which assumed the target phase velocity to be 1.5 km/s for T-phases. The T-phase signals that originated from the seismic source, however, show peaks in the frequency band above 1 Hz. As a result of the inter-element distances of 2 km, spatial aliasing is observed in the frequency-wavenumber analysis (F-K analysis) if the entire 100 Hz bandwidth of the hydrophones is used. This spatial aliasing is significant because the distance between hydrophones in the triplet is large in comparison to the ratio between the phase velocity of T-phase signals and the frequency. To circumvent this spatial aliasing problem, a three-step processing technique used in seismic array analysis is applied: (1) high-pass filtering above 1 Hz to retrieve the T-phase, followed by (2) extraction of the envelope of this signal to highlight the T-phase contribution, and finally (3) low-pass filtering of the envelope below 0.375 Hz. The F-K analysis provides accurate back-azimuth and slowness estimations without spatial aliasing. 
Deconvolved waveforms are also processed to retrieve tsunami components by using a three-pole model of the frequency-amplitude-phase (FAP) response below 0.1 Hz and the measured sensor response for higher frequencies. It is also shown that short-period pressure fluctuations recorded by the IMS hydrophones correspond to theoretical dispersion curves of tsunamis. Thus, short-period dispersive tsunami signals can be identified by the IMS hydrophone triplets.
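
The three-step technique can be sketched on a synthetic record. For brevity, a moving-average high-pass and a rectified moving-average envelope stand in for the sharper filters and Hilbert envelope that would typically be used in practice; the signal parameters are invented:

```python
import math

def moving_avg(x, w):
    """Centered moving average; windows are truncated at the ends."""
    half = w // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

fs = 100.0                              # sampling rate, Hz
t = [i / fs for i in range(4000)]       # 40 s record
# synthetic record: 0.05 Hz "tsunami-band" swell plus a 5 Hz T-phase burst at 15-25 s
x = [0.5 * math.sin(2 * math.pi * 0.05 * ti)
     + (math.sin(2 * math.pi * 5.0 * ti) if 15.0 <= ti <= 25.0 else 0.0)
     for ti in t]

hp = [xi - mi for xi, mi in zip(x, moving_avg(x, 101))]   # (1) high-pass above ~1 Hz
env = moving_avg([abs(v) for v in hp], 401)               # (2) rectify + (3) low-pass the envelope

peak_i = max(range(len(env)), key=env.__getitem__)
print(15.0 <= t[peak_i] <= 25.0)   # True: the envelope isolates the T-phase burst
```

The envelope is a slowly varying signal that marks where the high-frequency T-phase energy arrives, so the subsequent F-K analysis can run in the low-frequency band where the 2 km triplet spacing does not alias.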

  7. Error Estimation in an Optimal Interpolation Scheme for High Spatial and Temporal Resolution SST Analyses

    NASA Technical Reports Server (NTRS)

    Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn

    2010-01-01

Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional NWP. Current SST products lack the spatial and temporal resolution to accurately capture small-scale features that affect heat and moisture flux. NASA satellite data are used to produce a high spatial and temporal resolution SST analysis using an OI technique.

  8. Response Errors Explain the Failure of Independent-Channels Models of Perception of Temporal Order

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2012-01-01

    Independent-channels models of perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (maybe, but not only, motivated by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors and we show that the model produces psychometric functions that do not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the absence of monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models when response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal order judgment data are discussed. PMID:22493586
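
The response-error extension can be sketched as a mixture: with probability eps the observer presses a random key, and otherwise reports the true judgment. The Gaussian independent-channels parameterization below (arrival-time difference with a simultaneity band of half-width delta) is illustrative, not the authors' exact model:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def judge_probs(soa, sigma=70.0, delta=50.0):
    """Ternary judgment from an independent-channels arrival-time difference:
    P('A first'), P('simultaneous'), P('B first') at a given SOA (ms)."""
    p_b_first = phi((-delta - soa) / sigma)
    p_a_first = 1 - phi((delta - soa) / sigma)
    return p_a_first, 1 - p_a_first - p_b_first, p_b_first

def observed(soa, eps=0.1):
    """Mix true judgments with response errors: with prob eps, a random key of 3."""
    return tuple((1 - eps) * p + eps / 3 for p in judge_probs(soa))

true_p = judge_probs(0.0)
obs_p = observed(0.0)
print(abs(sum(obs_p) - 1) < 1e-9, obs_p[1] < true_p[1])
```

The error mixture compresses every psychometric function toward 1/3, and because the compression is multiplicative it distorts the functions unevenly, which is how the extended model can reproduce the observed violations of monotonicity and parallelism.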

  9. Support for Anterior Temporal Involvement in Semantic Error Production in Aphasia: New Evidence from VLSM

    ERIC Educational Resources Information Center

    Walker, Grant M.; Schwartz, Myrna F.; Kimberg, Daniel Y.; Faseyitan, Olufunsho; Brecher, Adelyn; Dell, Gary S.; Coslett, H. Branch

    2011-01-01

    Semantic errors in aphasia (e.g., naming a horse as "dog") frequently arise from faulty mapping of concepts onto lexical items. A recent study by our group used voxel-based lesion-symptom mapping (VLSM) methods with 64 patients with chronic aphasia to identify voxels that carry an association with semantic errors. The strongest associations were…

  10. Wisconsin Card Sorting Test performance and impulsivity in patients with temporal lobe epilepsy: suicidal risk and suicide attempts.

    PubMed

    Garcia Espinosa, Arlety; Andrade Machado, René; Borges González, Susana; García González, María Eugenia; Pérez Montoto, Ariadna; Toledo Sotomayor, Guillermo

    2010-01-01

The goal of the study described here was to determine if executive dysfunction and impulsivity are related to risk for suicide and suicide attempts in patients with temporal lobe epilepsy. Forty-two patients with temporal lobe epilepsy were recruited. A detailed medical history, neurological examination, serial EEGs, the Mini-International Neuropsychiatric Interview, executive function, and MRI were assessed. Multiple regression analysis was carried out to examine predictive associations between clinical variables and Wisconsin Card Sorting Test measures. Twenty-four patients scored greater than 7 on the Risk for Suicide Scale, which corresponds to the highest relative risk for suicide attempts. Family history of psychiatric disease, current major depressive episode, left temporal lobe epilepsy, and perseverative responses and total errors on the Wisconsin Card Sorting Test increased suicide risk and suicide attempts by factors of 6.3 and 7.5, respectively. Executive dysfunction (specifically, perseverative responses and more total errors) contributed greatly to suicide risk. Executive performance has a major impact on suicide risk and suicide attempts in patients with temporal lobe epilepsy. 2009 Elsevier Inc. All rights reserved.

  11. Implicit transfer of reversed temporal structure in visuomotor sequence learning.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2014-04-01

    Some spatio-temporal structures are easier to transfer implicitly in sequential learning. In this study, we investigated whether the consistent reversal of triads of learned components would support the implicit transfer of their temporal structure in visuomotor sequence learning. A triad comprised three sequential button presses ([1][2][3]) and seven consecutive triads comprised a sequence. Participants learned a sequence by trial and error until they could complete it 20 times without error. Then, they learned another sequence, in which each triad was reversed ([3][2][1]), partially reversed ([2][1][3]), or switched so as not to overlap with the other conditions ([2][3][1] or [3][1][2]). Even when the participants did not notice the alternation rule, the consistent reversal of the temporal structure of each triad led to better implicit transfer; this was confirmed in a subsequent experiment. These results suggest that the implicit transfer of the temporal structure of a learned sequence can be influenced by both the structure and consistency of the change. Copyright © 2013 Cognitive Science Society, Inc.

  12. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
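
    The residual-correction idea in this record can be sketched in a few lines: a data-driven model is trained on the discrepancies between the physics-based model and observations, and its predicted bias is then subtracted from new model output. The inverse-distance weighting below is a hypothetical stand-in for the paper's instance-based weighting and support-vector-regression machinery, not the authors' implementation.

```python
def predict_bias(x, train_x, train_residuals, k=3, eps=1e-9):
    """Predict the systematic error at x as an inverse-distance-weighted
    average of the residuals at the k nearest calibration points."""
    nearest = sorted(zip(train_x, train_residuals), key=lambda p: abs(p[0] - x))[:k]
    weights = [1.0 / (abs(xi - x) + eps) for xi, _ in nearest]
    return sum(w * r for (_, r), w in zip(nearest, weights)) / sum(weights)

def corrected_head(model_head, x, train_x, train_residuals):
    """Physics-based head prediction minus the learned systematic error."""
    return model_head - predict_bias(x, train_x, train_residuals)
```

    A correction of this kind can only remove the structured part of the error; as the abstract notes, its effectiveness depends on how much structure the physically-based model's error actually contains.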

  13. The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals

    NASA Astrophysics Data System (ADS)

    Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat

    2018-01-01

    Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
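
    Triple collocation estimates each product's error variance from three mutually independent measurements of the same quantity. A minimal covariance-notation sketch (an illustration of the standard technique, not the authors' exact processing, which also involves rescaling the triplets) looks like:

```python
# For three independent, equally calibrated estimates x, y, z of the same
# truth, E[(x - y)(x - z)] isolates the error variance of x, because the
# truth cancels in each difference and the error cross-terms average to zero.
def mean(a):
    return sum(a) / len(a)

def tc_error_variances(x, y, z):
    """Error variances of three collocated series (anomalies taken internally)."""
    xa = [v - mean(x) for v in x]
    ya = [v - mean(y) for v in y]
    za = [v - mean(z) for v in z]
    ex = mean([(a - b) * (a - c) for a, b, c in zip(xa, ya, za)])
    ey = mean([(b - a) * (b - c) for a, b, c in zip(xa, ya, za)])
    ez = mean([(c - a) * (c - b) for a, b, c in zip(xa, ya, za)])
    return ex, ey, ez
```

    The estimate relies on the errors of the three products being uncorrelated with each other and with the truth; violations of that assumption bias the recovered variances.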

  14. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Hydraulic Tomography and the Curse of Storativity

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Li, W.; Englert, A.

    2006-12-01

    Pumping tests are among the most common techniques for hydrogeological site investigation. Their traditional analysis is based on fitting analytical expressions to measured time series of drawdown. These expressions were derived for homogeneous conditions, whereas all natural aquifers are heterogeneous. The mentioned conceptual inconsistency complicates the hydrogeological interpretation of the obtained coefficients. In particular, it has been shown that the heterogeneity of transmissivity is aliased to variability in the estimated storativity. In hydraulic tomography, multiple pumping tests are jointly analyzed. The hydraulic parameters to be estimated are allowed to fluctuate in space. For regularization, a geostatistical smoothness criterion may be introduced. Thus, the inversion results in the most likely spatial distribution of parameters that is consistent with the drawdown measurements and follows a predefined geostatistical model. Applying the restricted maximum likelihood approach, the parameters of the prior covariance function (i.e., the prior variance and correlation length) can be inferred from the data as well. We have applied the quasi-linear geostatistical approach of inverse modeling to drawdown measurements of multiple, overlapping pumping tests performed at the test site Krauthausen near Jülich, Germany. To reduce the computational costs, we have characterized the drawdown curves by their temporal moments. In the estimation of the geostatistical parameters, the measurement error of heads turned out to be of vital importance. The less we trust the data, the larger is the estimated correlation length, resulting in a more uniform distribution of transmissivity. Similar to conventional pumping test analysis, the data analysis points to a high variability of storativity although the properties making up storativity are known to be only mildly heterogeneous.
We conjecture that the unresolved small-scale spatial variability of conductivity is mapped to variability of storativity. This is rather unfortunate since reliable field data on the variability of storativity are missing. The study underscores that structural information is difficult to extract from hydraulic data alone. Information on length scales and major deterministic features may be gained by geophysical surveying, even if rock-laws directly relating geophysical to hydraulic properties are considered unreliable.
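
    Characterizing each drawdown curve by its temporal moments, as the authors do to reduce computational costs, replaces a full time series with a few integrals. A minimal sketch of that reduction (trapezoidal quadrature is an assumption here; the abstract does not specify one):

```python
def trapz(ys, ts):
    """Trapezoidal-rule integral of samples ys taken at times ts."""
    return sum((y0 + y1) * (t1 - t0) / 2.0
               for y0, y1, t0, t1 in zip(ys, ys[1:], ts, ts[1:]))

def drawdown_moments(ts, s):
    """Zeroth temporal moment (area under the curve) and normalized first
    moment (mean arrival time) of a transient drawdown signal s(t)."""
    m0 = trapz(s, ts)
    m1 = trapz([t * v for t, v in zip(ts, s)], ts)
    return m0, m1 / m0
```

    The inversion can then match these few scalars per test instead of every time step, which is what makes joint analysis of many overlapping tests tractable.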

  16. Synergies Between Grace and Regional Atmospheric Modeling Efforts

    NASA Astrophysics Data System (ADS)

    Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.

    2014-12-01

    In the meteorological community, efforts converge towards implementation of high-resolution (< 12km) data-assimilating regional climate modelling/monitoring systems based on numerical weather prediction (NWP) cores. This is driven by requirements of improving process understanding, better representation of land surface interactions, atmospheric convection, orographic effects, and better forecasting on shorter timescales. This is relevant for the GRACE community since (1) these models may provide improved atmospheric mass separation / de-aliasing and smaller topography-induced errors, compared to global (ECMWF-Op, ERA-Interim) data, (2) they inherit high temporal resolution from NWP models, (3) parallel efforts towards improving the land surface component and coupling groundwater models may provide realistic hydrological mass estimates with sub-diurnal resolution, (4) parallel efforts towards re-analyses aim at providing consistent time series. (5) On the other hand, GRACE can help validate models and aid in the identification of processes needing improvement. A coupled atmosphere - land surface - groundwater modelling system is currently being implemented for the European CORDEX region at 12.5 km resolution, based on the TerrSysMP platform (COSMO-EU NWP, CLM land surface and ParFlow groundwater models). We report results from Springer et al. (J. Hydromet., accepted) on validating the water cycle in COSMO-EU using GRACE and precipitation, evapotranspiration and runoff data; confirming that the model performs favorably at representing observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model.
Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter/longer than 1 month, and spatial scales below/above ERA resolution. We find that differences between the regional and global models are more pronounced at high frequencies, with magnitudes at sub-grid and larger scales corresponding to 1-3 hPa (1-3 cm EWH); this is relevant for the assessment of post-GRACE concepts.

  17. Estimating the Velocity and Transport of the East Australian Current using Argo, XBT, and Altimetry

    NASA Astrophysics Data System (ADS)

    Zilberman, N. V.; Roemmich, D. H.; Gille, S. T.

    2016-02-01

    Western Boundary Currents (WBCs) are the strongest ocean currents in the subtropics, and constitute the main pathway through which warm water-masses transit from low to mid-latitudes in the subtropical gyres of the Atlantic, Pacific, and Indian Oceans. Heat advection by WBCs has a significant impact on heat storage in subtropical mode water formation regions and at high latitudes. The possibility that the magnitude of WBCs might change under greenhouse gas forcing has raised significant concerns. Improving our knowledge of WBC circulation is essential to accurately monitor the oceanic heat budget. Because of the narrowness and strong mesoscale variability of WBCs, estimation of WBC velocity and transport places heavy demands on any potential sampling scheme. One strategy for studying WBCs is to combine complementary data sources. High-resolution bathythermograph (HRX) profiles to 800 m have been collected along transects crossing the East Australian Current (EAC) system at 3-month nominal sampling intervals since 1991. EAC transects, with spatial sampling as fine as 10-15 km, are obtained off Brisbane (27°S) and Sydney (34°S), and crossing the related East Auckland Current north of Auckland. Here, HRX profiles collected since 2004 off Brisbane are merged with Argo float profiles and 1000 m trajectory-based velocities to expand HRX shear estimates to 2000 m and to estimate absolute geostrophic velocity and transport. A method for combining altimetric data with HRX and Argo profiles to mitigate temporal aliasing by the HRX transects and to reduce sampling errors in the HRX/Argo datasets is described. The HRX/Argo/altimetry-based estimate of the time-mean poleward alongshore transport of the EAC off Brisbane is 18.3 Sv, with a width of about 180 km, and of which 3.7 Sv recirculates equatorward on a similar spatial scale farther offshore. Geostrophic transport anomalies in the EAC at 27°S show variability of ± 1.3 Sv at interannual time scales related to ENSO.
The present calculation is a case study that will be extended to other subtropical WBCs.

  18. Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
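
    The decomposition itself is simple bookkeeping: cut the simulated horizon into contiguous time slices and prepend each slice with a short warm-up window whose only job is to initialize control states, with the warm-up results discarded. A hypothetical sketch (the slice count and warm-up length below are illustrative, not values from the paper):

```python
def decompose(n_hours, n_slices, warmup):
    """Return (warm_start, start, end) tuples. Each worker simulates
    [warm_start, end) but reports only [start, end); the [warm_start, start)
    window exists solely to initialize regulator/control state."""
    edges = [round(i * n_hours / n_slices) for i in range(n_slices + 1)]
    return [(max(0, a - warmup), a, b) for a, b in zip(edges, edges[1:])]
```

    Since each worker's cost is roughly (end - warm_start) time steps, wall time shrinks roughly with the slice count, at the price of the boundary errors the paper measures.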

  19. Contribution of stimulus attributes to errors in duration and distance judgments--a developmental study.

    PubMed

    Matsuda, F; Lan, W C; Tanimura, R

    1999-02-01

    In Matsuda's 1996 study, 4- to 11-yr.-old children (N = 133) watched two cars running on two parallel tracks on a CRT display and judged whether their durations and distances were equal and, if not, which was larger. In the present paper, the relative contributions of the four critical stimulus attributes (whether temporal starting points, temporal stopping points, spatial starting points, and spatial stopping points were the same or different between two cars) to the production of errors were quantitatively estimated based on the data for rates of errors obtained by Matsuda. The present analyses made it possible not only to understand numerically the findings about qualitative characteristics of the critical attributes described by Matsuda, but also to add more detailed findings about them.

  20. Evaluation of alignment error due to a speed artifact in stereotactic ultrasound image guidance.

    PubMed

    Salter, Bill J; Wang, Brian; Szegedi, Martin W; Rassiah-Szegedi, Prema; Shrieve, Dennis C; Cheng, Roger; Fuss, Martin

    2008-12-07

    Ultrasound (US) image guidance systems used in radiotherapy are typically calibrated for soft tissue applications, thus introducing errors in depth-from-transducer representation when used in media with a different speed of sound propagation (e.g. fat). This error is commonly referred to as the speed artifact. In this study we utilized a standard US phantom to demonstrate the existence of the speed artifact when using a commercial US image guidance system to image through layers of simulated body fat, and we compared the results with calculated/predicted values. A general purpose US phantom (speed of sound (SOS) = 1540 m s⁻¹) was imaged on a multi-slice CT scanner at a 0.625 mm slice thickness and 0.5 mm × 0.5 mm axial pixel size. Target-simulating wires inside the phantom were contoured and later transferred to the US guidance system. Layers of various thickness (1-8 cm) of commercially manufactured fat-simulating material (SOS = 1435 m s⁻¹) were placed on top of the phantom to study the depth-related alignment error. In order to demonstrate that the speed artifact is not caused by adding additional layers on top of the phantom, we repeated these measurements in an identical setup using commercially manufactured tissue-simulating material (SOS = 1540 m s⁻¹) for the top layers. For the fat-simulating material used in this study, we observed the magnitude of the depth-related alignment errors resulting from the speed artifact to be 0.7 mm cm⁻¹ of fat imaged through. The measured alignment errors caused by the speed artifact agreed with the calculated values within one standard deviation for all of the different thicknesses of fat-simulating material studied here. We demonstrated the depth-related alignment error due to the speed artifact when using US image guidance for radiation treatment alignment and note that the presence of fat causes the target to be aliased to a depth greater than it actually is.
For typical US guidance systems in use today, this will lead to delivery of the high dose region at a position slightly posterior to the intended region for a supine patient. When possible, care should be taken to avoid imaging through a thick layer of fat for larger patients in US alignments or, if unavoidable, the spatial inaccuracies introduced by the artifact should be considered by the physician during the formulation of the treatment plan.
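
    The reported 0.7 mm cm⁻¹ figure follows directly from the two sound speeds: the scanner converts round-trip echo time to depth assuming the soft-tissue speed (1540 m/s), so each centimetre actually traversed at the slower fat speed (1435 m/s) takes longer than assumed and is displayed too deep by (1540/1435 − 1) cm ≈ 0.73 mm. A worked check of that arithmetic:

```python
def speed_artifact_error_mm(fat_cm, c_assumed=1540.0, c_actual=1435.0):
    """Apparent-depth overestimate (mm) when imaging through fat_cm of a
    slower medium while the scanner assumes sound speed c_assumed (m/s).
    The extra round-trip time through the fat is mapped to extra depth
    at the assumed speed, giving fat_cm * (c_assumed/c_actual - 1)."""
    return fat_cm * 10.0 * (c_assumed / c_actual - 1.0)
```

    The computed 0.73 mm cm⁻¹ is consistent with the 0.7 mm cm⁻¹ the study measured, and an 8 cm fat layer accordingly shifts the target by nearly 6 mm.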

  1. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

    A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
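
    The averaging step of such an interpolator can be sketched as a weighted mean whose spatial and temporal ranges are the tuned pair; the 510 km / 72 h values suggested for the QuikSCAT application are used as defaults below. The Gaussian kernel and the one-dimensional distance are simplifying assumptions for illustration, not the paper's exact scheme.

```python
import math

def spacetime_interp(obs, x0_km, t0_h, range_km=510.0, range_h=72.0):
    """Weighted mean of (x_km, t_h, value) samples, with weights decaying
    jointly over the chosen spatial and temporal ranges."""
    num = den = 0.0
    for x, t, v in obs:
        w = math.exp(-((x - x0_km) / range_km) ** 2
                     - ((t - t0_h) / range_h) ** 2)
        num += w * v
        den += w
    return num / den
```

    Choosing the ranges too small leaves aliasing from the uneven satellite sampling in the averaged field; choosing them too large smears real variability, which is why the paper picks the smallest pair that removes most of the aliasing.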

  2. The vertical structure of upper ocean variability at the Porcupine Abyssal Plain during 2012-2013

    NASA Astrophysics Data System (ADS)

    Damerell, Gillian M.; Heywood, Karen J.; Thompson, Andrew F.; Binetti, Umberto; Kaiser, Jan

    2016-05-01

    This study presents the characterization of variability in temperature, salinity and oxygen concentration, including the vertical structure of the variability, in the upper 1000 m of the ocean over a full year in the northeast Atlantic. Continuously profiling ocean gliders with vertical resolution between 0.5 and 1 m provide more information on temporal variability throughout the water column than time series from moorings with sensors at a limited number of fixed depths. The heat, salt and dissolved oxygen content are quantified at each depth. While the near surface heat content is consistent with the net surface heat flux, heat content of the deeper layers is driven by gyre-scale water mass changes. Below ~150 m, heat and salt content display intraseasonal variability which has not been resolved by previous studies. A mode-1 baroclinic internal tide is detected as a peak in the power spectra of water mass properties. The depth of minimum variability is at ~415 m for both temperature and salinity, but this is a depth of high variability for oxygen concentration. The deep variability is dominated by the intermittent appearance of Mediterranean Water, which shows evidence of filamentation. Susceptibility to salt fingering occurs throughout much of the water column for much of the year. Between about 700-900 m, the water column is susceptible to diffusive layering, particularly when Mediterranean Water is present. This unique ability to resolve both high vertical and temporal variability highlights the importance of intraseasonal variability in upper ocean heat and salt content, variations that may be aliased by traditional observing techniques.

  3. Artifacts in Sonography - Part 3.

    PubMed

    Bönhof, Jörg A; McLaughlin, Glen

    2018-06-01

    As a continuation of parts 1 and 2, this article discusses artifacts caused by insufficient temporal resolution, artifacts in color and spectral Doppler sonography, and information regarding artifacts in sonography with contrast agents. There are artifacts that occur in B-mode sonography as well as in Doppler imaging methods and sonography with contrast agents, such as slice thickness artifacts and bow artifacts, shadows, mirroring, and artifacts due to refraction that appear, for example, as double images, because they are based on the same formation mechanisms. In addition, there are artifacts specific to Doppler sonography, such as the twinkling artifact, and method-based motion artifacts, such as aliasing, the ureteric jet, and artifacts due to tissue vibration. The artifacts specific to contrast mode include echoes from usually highly reflective structures that are not contrast bubbles ("leakage"). Contrast agent can also change the transmitting signal so that even structures not containing contrast agent are echogenic ("pseudoenhancement"). While artifacts can cause problems regarding differential diagnosis, they can also be useful for determining the diagnosis. Therefore, effective use of sonography requires both profound knowledge and skilled interpretation of artifacts. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Anisotropic scene geometry resampling with occlusion filling for 3DTV applications

    NASA Astrophysics Data System (ADS)

    Kim, Jangheon; Sikora, Thomas

    2006-02-01

    Image and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability in free-viewpoint. However, two major limitations are ghosting and blurring due to their sampling-based mechanism. The scene geometry, which supports the selection of accurate sampling positions, is proposed using a global method (i.e. approximate depth plane) and a local method (i.e. disparity estimation). This paper focuses on the local method since it can yield more accurate rendering quality without a large number of cameras. The local scene geometry presents two difficulties: geometrical density and uncovered areas containing hidden information. These are serious drawbacks to reconstructing an arbitrary viewpoint without aliasing artifacts. To solve the problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in scene geometry, and anisotropic diffusion prevents filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space; the nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low density are efficiently removed by isotropic filtering, and the edge blurring can be solved by the anisotropic method in a single process. Because sampling gaps differ in size, the resampling condition is defined considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve the coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling with a large scale is iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
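
    The edge-preserving principle behind anisotropic diffusive resampling can be illustrated in one dimension with a Perona-Malik-style scheme: the diffusion coefficient collapses where the gradient is large, so noise is smoothed while structure boundaries survive. This toy version illustrates the principle only; it is not the paper's tensor-based, scale-space implementation.

```python
def anisotropic_diffuse(u, k=1.0, dt=0.2, n_iter=20):
    """Explicit 1-D Perona-Malik diffusion with zero-flux boundaries.
    The edge-stopping function g(d) -> 0 for large gradients d, so strong
    edges act as barriers while small oscillations diffuse away."""
    u = list(u)
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        grads = [u[i + 1] - u[i] for i in range(len(u) - 1)]  # forward gradients
        fluxes = [0.0] + [g(d) * d for d in grads] + [0.0]    # limited, zero at ends
        u = [ui + dt * (fluxes[i + 1] - fluxes[i]) for i, ui in enumerate(u)]
    return u
```

    In the paper's setting the same mechanism operates on disparity/depth samples in 2-D, with the Gaussian scale-space supplying the coarse-to-fine schedule.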

  5. Azimuthal filter to attenuate ground roll noise in the F-kx-ky domain for land 3D-3C seismic data with uneven acquisition geometry

    NASA Astrophysics Data System (ADS)

    Arevalo-Lopez, H. S.; Levin, S. A.

    2016-12-01

    The vertical component of seismic wave reflections is contaminated by surface noise such as ground roll and secondary scattering from near surface inhomogeneities. A common method for attenuating these, unfortunately often aliased, arrivals is via velocity filtering and/or multichannel stacking. 3D-3C acquisition technology provides two additional sources of information about the surface wave noise that we exploit here: (1) areal receiver coverage, and (2) a pair of horizontal components recorded at the same location as the vertical component. Areal coverage allows us to segregate arrivals at each individual receiver or group of receivers by direction. The horizontal components, having much less compressional reflection body wave energy than the vertical component, provide a template of where to focus our energies on attenuating the surface wave arrivals. (In the simplest setting, the vertical component is a scaled 90 degree phase rotated version of the radial horizontal arrival, a potential third lever we have not yet tried to integrate.) The key to our approach is to use the magnitude of the horizontal components to outline a data-adaptive "velocity" filter region in the w-Kx-Ky domain. The big advantage for us is that even in the presence of uneven receiver geometries, the filter automatically tracks through aliasing without manual sculpting and a priori velocity and dispersion estimation. The method was applied to an aliased synthetic dataset based on a five-layer earth model which also included shallow scatterers to simulate near-surface inhomogeneities and successfully removed both the ground roll and scatterers from the vertical component (Figure 1).

  6. On the wave number 2 eastward propagating quasi 2 day wave at middle and high latitudes

    NASA Astrophysics Data System (ADS)

    Gu, Sheng-Yang; Liu, Han-Li; Pedatella, N. M.; Dou, Xiankang; Liu, Yu

    2017-04-01

    The temperature and wind data sets from the ensemble data assimilation version of the Whole Atmosphere Community Climate Model + Data Assimilation Research Testbed (WACCM + DART) developed at the National Center for Atmospheric Research (NCAR) are utilized to study the seasonal variability of the eastward quasi 2 day wave (QTDW) with zonal wave number 2 (E2) during 2007. The aliasing ratio of E2 from wave number 3 (W3) in the synoptic WACCM data set is a constant value of 4 × 10⁻⁶% due to its uniform sampling pattern, whereas the aliasing is latitudinally dependent if the WACCM fields are sampled asynoptically based on the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) sampling. The aliasing ratio based on SABER sampling is 75% at 40°S during late January, where and when W3 peaks. The analysis of the synoptic WACCM data set shows that the E2 is in fact a winter phenomenon, which peaks in the stratosphere and lower mesosphere at high latitudes. In the austral winter period, the amplitudes of E2 can reach 10 K, 20 m/s, and 30 m/s for temperature, zonal, and meridional winds, respectively. In the boreal winter period, the wave perturbations are only one third as strong as those in austral winter. Diagnostic analysis also shows that the mean flow instabilities in the winter upper mesosphere polar region provide sources for the amplification of E2. This is different from the westward QTDWs, whose amplifications are related to the summer easterly jet. In addition, the E2 also peaks at lower altitude than the westward modes.
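
    The aliasing mechanism invoked here is the familiar sampling-theorem one: once a signal is sampled below its Nyquist rate, its samples are indistinguishable from those of a lower-frequency signal, so one wave's power leaks into another's estimate. A minimal temporal analogue (the frequencies are chosen for illustration, not taken from the paper):

```python
import math

def alias_frequency(f, fs):
    """Frequency to which a tone at f folds when sampled at rate fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# A 2.2-cycle-per-day signal sampled once per day produces exactly the same
# sample values as a 0.2-cycle-per-day signal: the two are indistinguishable.
samples_true = [math.sin(2 * math.pi * 2.2 * n) for n in range(10)]
samples_alias = [math.sin(2 * math.pi * 0.2 * n) for n in range(10)]
```

    The asynoptic SABER orbit plays the role of the coarse sampling here: in zonal wave number/frequency space it folds W3 power onto E2, which is why the aliasing ratio depends on where and when W3 is strong.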

  7. Diffusion in realistic biophysical systems can lead to aliasing effects in diffusion spectrum imaging.

    PubMed

    Lacerda, Luis M; Sperl, Jonathan I; Menzel, Marion I; Sprenger, Tim; Barker, Gareth J; Dell'Acqua, Flavio

    2016-12-01

    Diffusion spectrum imaging (DSI) is an imaging technique that has been successfully applied to resolve white matter crossings in the human brain. However, its accuracy in complex microstructure environments has not been well characterized. Here we have simulated different tissue configurations, sampling schemes, and processing steps to evaluate DSI performance under realistic biophysical conditions. A novel approach to compute the orientation distribution function (ODF) has also been developed to include biophysical constraints, namely integration ranges compatible with axial fiber diffusivities. Performed simulations identified several DSI configurations that consistently show aliasing artifacts caused by fast diffusion components for both isotropic diffusion and fiber configurations. The proposed method for ODF computation showed some improvement in reducing such artifacts and improving the ability to resolve crossings, while keeping the quantitative nature of the ODF. In this study, we identified an important limitation of current DSI implementations, specifically the presence of aliasing due to fast diffusion components like those from pathological tissues, which are not well characterized, and can lead to artifactual fiber reconstructions. To minimize this issue, a new way of computing the ODF was introduced, which removes most of these artifacts and offers improved angular resolution. Magn Reson Med 76:1837-1847, 2016. © 2015 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  8. On representation of temporal variability in electricity capacity planning models

    DOE PAGES

    Merrick, James H.

    2016-08-23

    This study systematically investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. Inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight. The mechanisms underlying the introduction of this error are shown. How many representative periods are needed to fully capture the variability is then investigated. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution is possible with on the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robust aggregation to the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts can be applied to any such temporal dataset, providing, at the least, a benchmark that any other aggregation method can aim to emulate. Finally, how prior information about peak pricing hours can potentially reduce resolution further is also discussed.
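    The aggregation idea can be sketched with a toy load-duration-curve approach: sort the 8760 hourly values, partition them into k bins, and represent each bin by its mean with a weight equal to the bin size. This is a simple stand-in for the paper's aggregation analysis, using synthetic demand data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly demand for one year (8760 values):
# diurnal + seasonal cycles plus noise.
t = np.arange(8760)
demand = (50
          + 10 * np.sin(2 * np.pi * t / 24)      # diurnal cycle
          + 5 * np.sin(2 * np.pi * t / 8760)     # seasonal cycle
          + rng.normal(0, 1, t.size))

def representative_hours(x, k):
    """Aggregate a load-duration curve into k representative hours:
    sort the hours, split into k equal-size bins, and represent each
    bin by its mean demand, weighted by the bin size."""
    bins = np.array_split(np.sort(x), k)
    levels = np.array([b.mean() for b in bins])
    weights = np.array([b.size for b in bins])
    return levels, weights

levels, weights = representative_hours(demand, k=10)

# Total annual energy is preserved by construction.
assert np.isclose((levels * weights).sum(), demand.sum())
```

With demand as the only source of variability, around 10 such levels reproduce the duration curve closely; correlated wind and solar series would force k far higher, as the record notes.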

  9. On representation of temporal variability in electricity capacity planning models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrick, James H.

    This study systematically investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. Inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight. The mechanisms underlying the introduction of this error are shown. How many representative periods are needed to fully capture the variability is then investigated. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution is possible with on the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robust aggregation to the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts can be applied to any such temporal dataset, providing, at the least, a benchmark that any other aggregation method can aim to emulate. Finally, how prior information about peak pricing hours can potentially reduce resolution further is also discussed.

  10. Avulsion research using flume experiments and highly accurate and temporal-rich SfM datasets

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Bertoldi, W.; Vitti, A.

    2017-12-01

    SfM's ability to produce high-quality, large-scale digital elevation models (DEMs) of complicated and rapidly evolving systems has made it a valuable technique for low-budget researchers and practitioners. While SfM has provided valuable datasets that capture single-flood-event DEMs, there is an increasing scientific need to capture datasets at higher temporal resolution that can quantify the evolutionary processes rather than pre- and post-flood snapshots. However, the dangerous field conditions during flood events and image-matching challenges (e.g. wind, rain) prevent quality SfM image acquisition. Conversely, flume experiments offer opportunities to document flood events, but achieving DEMs consistent and accurate enough to detect subtle changes in dry and inundated areas remains a challenge for SfM (e.g. parabolic error signatures). This research aimed at investigating the impact of naturally occurring and manipulated avulsions on braided river morphology and on the encroachment of floodplain vegetation, using laboratory experiments. This required DEMs with millimeter accuracy and precision, at a temporal resolution sufficient to capture the processes; SfM was chosen as the most practical method. Through a redundant local network design and a meticulous ground control point (GCP) survey with a Leica Total Station in red laser configuration (reported 2 mm accuracy), the SfM results compared to separate ground-truthing data produced mean errors of 1.5 mm (accuracy) and standard deviations of 1.4 mm (precision) without parabolic error signatures. Lighting conditions in the flume were limited to uniform, oblique, and filtered LED strips, which removed glint and thus improved bed elevation mean errors to 4 mm; errors were further reduced by means of open source software for refraction correction. 
The obtained datasets have provided the ability to quantify how small flood events with avulsion can have morphologic and vegetation impacts similar to those of large flood events without avulsion. Further, this research highlights the potential application of SfM in the laboratory and its ability to document physical and biological processes at greater spatial and temporal resolution. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917

  11. Optical Oversampled Analog-to-Digital Conversion

    DTIC Science & Technology

    1992-06-29

    hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact...optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions

  12. Monitoring gait in multiple sclerosis with novel wearable motion sensors.

    PubMed

    Moon, Yaejin; McGinnis, Ryan S; Seagers, Kirsten; Motl, Robert W; Sheth, Nirav; Wright, John A; Ghaffari, Roozbeh; Sosnoff, Jacob J

    2017-01-01

    Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) were assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9 ms error for stride time and 6±7 ms error for step time (0.6-2.6% relative error). Swing time had the least accuracy±precision (25±19 ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p < 0.01). 
BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic.

  13. Improving the spatial and temporal resolution with quantification of uncertainty and errors in earth observation data sets using Data Interpolating Empirical Orthogonal Functions methodology

    NASA Astrophysics Data System (ADS)

    El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander

    2016-04-01

    There is an increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models are improved when they are run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself through irregularities in the spatial and temporal domain of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution, down to the daily and weekly time steps frequently required by process-based models [1]. The DINEOF methodology results in a degree of error being affixed to the refined data product. In order to determine the degree of error introduced through this process, suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Artificial cloud cover scenarios are also conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. dx.doi.org/10.1016/j.seares.2010.08.002
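    The core DINEOF idea can be sketched as an iterative truncated-SVD (EOF) reconstruction: initialize gaps with the field mean, then repeatedly reconstruct the field from its leading EOF modes and overwrite only the missing entries. A minimal sketch on a synthetic low-rank space-time field (not the MERIS data or a reference implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic space-time field built from two spatial/temporal modes.
nx, nt = 40, 60
xs, ts = np.linspace(0, 1, nx), np.linspace(0, 1, nt)
field = (np.outer(np.sin(2 * np.pi * xs), np.cos(2 * np.pi * ts))
         + 0.5 * np.outer(np.cos(4 * np.pi * xs), np.sin(4 * np.pi * ts)))

# Mask ~30% of entries to mimic cloud cover.
mask = rng.random(field.shape) < 0.3
data = field.copy()
data[mask] = np.nan

def dineof_fill(a, n_modes=2, n_iter=50):
    """Fill gaps by iterating a truncated-SVD (EOF) reconstruction,
    overwriting only the missing entries at each pass."""
    missing = np.isnan(a)
    filled = np.where(missing, np.nanmean(a), a)
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        filled[missing] = recon[missing]
    return filled

filled = dineof_fill(data)
rmse = np.sqrt(np.mean((filled[mask] - field[mask]) ** 2))  # error on the gaps
```

In practice the truncation rank is chosen by cross-validation on artificially removed data, which is exactly the role of the artificial cloud cover scenarios described above.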

  14. Sediment movement along the U.S. east coast continental shelf-I. Estimates of bottom stress using the Grant-Madsen model and near-bottom wave and current measurements

    USGS Publications Warehouse

    Lyne, V.D.; Butman, B.; Grant, W.D.

    1990-01-01

    Bottom stress is calculated for several long-term time-series observations, made on the U.S. east coast continental shelf during winter, using the wave-current interaction and moveable bed models of Grant and Madsen (1979, Journal of Geophysical Research, 84, 1797-1808; 1982, Journal of Geophysical Research, 87, 469-482). The wave and current measurements were obtained by means of a bottom tripod system which measured current using a Savonius rotor and vane and waves by means of a pressure sensor. The variables were burst sampled about 10% of the time. Wave energy was reasonably resolved, although aliased by wave groupiness, and wave period was accurate to 1-2 s during large storms. Errors in current speed and direction depend on the speed of the mean current relative to the wave current. In general, errors in bottom stress caused by uncertainties in measured current speed and wave characteristics were 10-20%. During storms, the bottom stress calculated using the Grant-Madsen models exceeded stress computed from conventional drag laws by a factor of about 1.5 on average and 3 or more during storm peaks. Thus, even in water as deep as 80 m, oscillatory near-bottom currents associated with surface gravity waves of period 12 s or longer will contribute substantially to bottom stress. Given that the Grant-Madsen model is correct, parameterizations of bottom stress that do not incorporate wave effects will substantially underestimate stress and sediment transport in this region of the continental shelf.
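    The contrast between a conventional drag law and a wave-aware estimate can be motivated with a crude time average over a wave cycle: because stress is quadratic in the instantaneous speed, superposing a wave orbital velocity on the mean current raises the mean stress. This toy average is only a motivation sketch, not the Grant-Madsen model (which iterates on a combined wave-current friction factor); the drag coefficient is an illustrative value:

```python
import numpy as np

RHO = 1025.0   # seawater density, kg/m^3
CD = 3e-3      # drag coefficient (illustrative value, not from the paper)

def drag_law_stress(u_mean):
    """Conventional quadratic drag law: tau = rho * Cd * |u| * u."""
    return RHO * CD * abs(u_mean) * u_mean

def stress_with_waves(u_mean, u_wave, n=1000):
    """Time-average the quadratic stress over one wave cycle with an
    oscillatory near-bottom wave velocity superposed on the mean
    current. Illustrates (but does not implement) why wave-current
    models predict larger stress than current-only drag laws."""
    phase = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = u_mean + u_wave * np.cos(phase)
    return np.mean(RHO * CD * np.abs(u) * u)

tau0 = drag_law_stress(0.2)              # 0.2 m/s mean current alone
tau_storm = stress_with_waves(0.2, 0.4)  # plus 0.4 m/s wave orbital velocity
# tau_storm exceeds tau0 by a factor of roughly 2-3, consistent in spirit
# with the storm-peak enhancement reported in the abstract.
```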

  15. CTER-rapid estimation of CTF parameters with error assessment.

    PubMed

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
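    For reference, a common 1-D parameterization of the CTF as a function of spatial frequency, defocus, spherical aberration, and amplitude contrast can be evaluated directly. Sign and unit conventions vary between packages; this sketch is not CTER's implementation and ignores astigmatism:

```python
import numpy as np

def ctf_1d(s, defocus_um=1.5, cs_mm=2.7, kv=300, amp_contrast=0.1):
    """A common 1-D CTF parameterization:
    CTF(s) = -sqrt(1 - A^2) sin(gamma) - A cos(gamma), with
    gamma(s) = -pi lambda dz s^2 + (pi/2) Cs lambda^3 s^4."""
    # Relativistic electron wavelength in Angstrom from the voltage.
    v = kv * 1e3
    lam = 12.2639 / np.sqrt(v + 0.97845e-6 * v**2)
    dz = defocus_um * 1e4      # um -> Angstrom
    cs = cs_mm * 1e7           # mm -> Angstrom
    gamma = -np.pi * lam * dz * s**2 + 0.5 * np.pi * cs * lam**3 * s**4
    a = amp_contrast
    return -np.sqrt(1 - a**2) * np.sin(gamma) - a * np.cos(gamma)

s = np.linspace(0, 0.4, 512)   # spatial frequency in 1/Angstrom
ctf = ctf_1d(s)
# At s = 0 the CTF reduces to pure amplitude contrast: ctf[0] == -0.1
```

Fitting this model's oscillations to the rotationally averaged power spectrum is what defocus estimation amounts to; the discrete sampling of such a rapidly oscillating curve is also where the reciprocal-space aliasing limit mentioned above comes from.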

  16. Multiple Hypothesis Correlation for Space Situational Awareness

    DTIC Science & Technology

    2011-08-29

    formulations with anti-aliasing through hybrid approaches such as the Drizzle algorithm [43] all the way up through to image superresolution techniques. Most... superresolution techniques. Second, given a set of images, either directly from the sensor or preprocessed using the above techniques, we showed how

  17. 78 FR 52553 - Privacy Act of 1974; Department of Homeland Security/ALL-035 Common Entity Index Prototype System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ... data elements: Full Name; Alias(es); Gender; Date of Birth; Country of Birth; Country of Citizenship... locked drawer behind a locked door. The records may be stored on magnetic disc, tape, or digital media...

  18. Tri-linear color multi-linescan sensor with 200 kHz line rate

    NASA Astrophysics Data System (ADS)

    Schrey, Olaf; Brockherde, Werner; Nitta, Christian; Bechen, Benjamin; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.

    2016-11-01

    In this paper we present a newly developed linear CMOS high-speed line-scanning sensor realized in a 0.35 μm CMOS OPTO process, with line rates of 200 kHz for true RGB and 600 kHz for monochrome operation. In total, 60 lines are integrated in the sensor, allowing for electronic position adjustment. The lines are read out in a rolling shutter manner. The high readout speed is achieved by a column-wise organization of the readout chain. At full speed, the sensor provides RGB color images with a spatial resolution down to 50 μm. This feature enables a variety of applications such as quality assurance in print inspection, real-time surveillance of railroad tracks, in-line monitoring in flat panel fabrication lines, and many more. The sensor has a fill factor close to 100%, preventing aliasing and color artefacts. Hence the tri-linear technology is robust against aliasing, ensuring better inspection quality and thus less waste in production lines.

  19. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
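    As a baseline for what SR demosaicing improves upon, plain bilinear demosaicing of an RGGB Bayer mosaic can be written as three mask-and-convolve steps. This is a generic textbook baseline, not the adaptive Wiener filter method of this record:

```python
import numpy as np

def conv3(img, k):
    """3x3 'same' convolution with zero padding via shifted sums."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic: zero out the missing
    samples per channel, then interpolate them from their neighbors."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # R at even/even
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # B at odd/odd
    g_mask = 1 - r_mask - b_mask                        # G elsewhere
    k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
    k_g = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
    r = conv3(raw * r_mask, k_rb)
    b = conv3(raw * b_mask, k_rb)
    g = conv3(raw * g_mask, k_g)
    return np.stack([r, g, b], axis=-1)

# A flat gray scene is reconstructed exactly in the image interior
# (the one-pixel border is biased by the zero padding).
raw = np.full((8, 8), 0.5)
rgb = demosaic_bilinear(raw)
```

On real scenes this baseline leaves the channel-to-channel aliasing that the abstract describes; the AWF SR approach instead pools the shifted CFA samples with statistical cross-channel weights.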

  20. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application. PMID:27472341

  1. Turbulent Channel Flow Measurements with a Nano-scale Thermal Anemometry Probe

    NASA Astrophysics Data System (ADS)

    Bailey, Sean; Witte, Brandon

    2014-11-01

    Using a Nano-scale Thermal Anemometry Probe (NSTAP), streamwise velocity was measured in a turbulent channel flow wind tunnel at Reynolds numbers ranging from Reτ = 500 to Reτ = 4000. Use of these probes results in a sensing-length-to-viscous-length-scale ratio of just 5 at the highest Reynolds number measured; the measured results can thus be considered free of spatial filtering effects. Point statistics are compared to recently published DNS and LDV data at similar Reynolds numbers and the results are found to be in good agreement. However, comparison of the measured spectra provides further evidence of aliasing at long wavelengths due to application of Taylor's frozen flow hypothesis, with increased aliasing evident at increasing Reynolds numbers. In addition to conventional point statistics, the dissipative scales of turbulence are investigated with focus on the wall-dependent scaling. Results support the existence of a universal pdf distribution of these scales once scaled to account for large-scale anisotropy. This research is supported by KSEF Award KSEF-2685-RDE-015.

  2. Sampling theory for asynoptic satellite observations. I Space-time spectra, resolution, and aliasing. II - Fast Fourier synoptic mapping

    NASA Technical Reports Server (NTRS)

    Salby, M. L.

    1982-01-01

    An evaluation of the information content of asynoptic data taken in the form of nadir sonde and limb scan observations is presented, and a one-to-one correspondence is established between the alias-free data and twice-daily synoptic maps. Attention is given to space and time limitations of sampling, and the orbital geometry is discussed. The sampling pattern is demonstrated to determine unique space-time spectra at all wavenumbers and frequencies. Spectral resolution and aliasing are explored, while restrictions on sampling and information content are defined. It is noted that irregular sampling at high latitudes produces spurious contamination effects. An Asynoptic Sampling Theorem is thereby formulated, as is a Synoptic Retrieval Theorem in the second part of the article. In the latter, a procedure is developed for retrieving the unique correspondence between the asynoptic data and the synoptic maps. Application examples are provided using data from the Nimbus-6 satellite.
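    The frequency folding that underlies all of the aliasing discussion in these records can be sketched directly: a tone above the Nyquist rate fs/2 reappears at its folded frequency. Using twice-daily sampling, in the spirit of the synoptic-map setting above (the specific frequencies are illustrative):

```python
import numpy as np

def aliased_frequency(f, fs):
    """Apparent frequency of a tone of frequency f sampled at rate fs:
    fold f into the principal interval [0, fs/2]."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs = 2.0        # two samples per day, like twice-daily synoptic maps
f_true = 1.3    # cycles per day: above the Nyquist rate fs/2 = 1
f_alias = aliased_frequency(f_true, fs)   # folds to 0.7 cycles per day

# Verify with an FFT of the undersampled tone: the spectral peak sits
# at the folded frequency, not the true one.
n = 2000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t)
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
assert np.isclose(peak, f_alias)
```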

  3. Analytical three-point Dixon method: With applications for spiral water-fat imaging.

    PubMed

    Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G

    2016-02-01

    The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions of water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single breathhold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with comparable sharpness to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.

  4. New MHD feedback control schemes using the MARTe framework in RFX-mod

    NASA Astrophysics Data System (ADS)

    Piron, Chiara; Manduchi, Gabriele; Marrelli, Lionello; Piovesan, Paolo; Zanca, Paolo

    2013-10-01

    Real-time feedback control of MHD instabilities is a topic of major interest in magnetic thermonuclear fusion, since it allows a device's performance to be optimized even beyond its stability bounds. The stability properties of different magnetic configurations are important test benches for real-time control systems. RFX-mod, a Reversed Field Pinch experiment that can also operate as a tokamak, is a well-suited device to investigate this topic. It is equipped with a sophisticated magnetic feedback system that controls MHD instabilities and error fields by means of 192 active coils and a corresponding grid of sensors. In addition, the RFX-mod control system has recently gained new potentialities thanks to the introduction of the MARTe framework and of a new CPU architecture. These capabilities allow the study of new feedback algorithms relevant to both RFP and tokamak operation and contribute to the debate on the optimal feedback strategy. This work focuses on the design of new feedback schemes. For this purpose new magnetic sensors have been explored, together with new algorithms that refine the de-aliasing computation of the radial sideband harmonics. The comparison of different sensor and feedback strategy performance is described in both RFP and tokamak experiments.

  5. Contour-Based Corner Detection and Classification by Using Mean Projection Transform

    PubMed Central

    Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein

    2014-01-01

    Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images. PMID:24590354
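    The parabolic-fit refinement step mentioned above can be sketched generically: fitting a parabola through three samples around an extremum gives a closed-form sub-sample offset. This is the standard three-point formula, not the record's exact MPT formulation:

```python
def parabolic_peak(y_left, y_center, y_right):
    """Sub-sample offset of an extremum from three equally spaced
    samples, obtained by fitting a parabola through them. Returns the
    offset (in sample units, within [-0.5, 0.5] near a true extremum)
    of the parabola's vertex relative to the center sample."""
    denom = y_left - 2.0 * y_center + y_right
    if denom == 0.0:
        return 0.0  # degenerate (flat) case: keep the center position
    return 0.5 * (y_left - y_right) / denom

# Example: samples of f(x) = -(x - 0.3)**2 at x = -1, 0, 1.
samples = [-(x - 0.3) ** 2 for x in (-1, 0, 1)]
offset = parabolic_peak(*samples)   # recovers the vertex at 0.3
```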

  6. Contour-based corner detection and classification by using mean projection transform.

    PubMed

    Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein

    2014-02-28

    Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images.

  7. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. 
For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
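    The sinogram (view) interpolation step evaluated for FDK can be sketched as per-channel linear interpolation across view angles, here recovering 984 views from a 41-view subset of a smooth synthetic sinogram (illustrative geometry and signal, not the NCAT simulation setup of the paper):

```python
import numpy as np

def interpolate_views(sparse_sino, sparse_angles, full_angles):
    """Angularly interpolate a sparse-view sinogram onto a full set of
    view angles, one detector channel at a time: a simple stand-in for
    the sinogram interpolation step discussed in the abstract."""
    n_det = sparse_sino.shape[1]
    full = np.empty((len(full_angles), n_det))
    for d in range(n_det):
        full[:, d] = np.interp(full_angles, sparse_angles, sparse_sino[:, d])
    return full

# Smooth synthetic sinogram: keep every 24th of 984 views (41 views).
full_angles = np.linspace(0, np.pi, 984, endpoint=False)
sparse_idx = np.arange(0, 984, 24)
det = np.linspace(-1, 1, 64)
truth = np.cos(full_angles)[:, None] * np.exp(-det[None, :] ** 2 / 0.1)
est = interpolate_views(truth[sparse_idx], full_angles[sparse_idx], full_angles)
max_err = np.abs(est - truth).max()   # small for angularly smooth data
```

On real sparse-view data the angular signal is far less smooth near sharp objects, which is why interpolation helps the analytical FDK reconstruction but is unnecessary for MBIR, whose forward model uses only the measured views.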

  8. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    PubMed Central

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-01-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. Methods: We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results: With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. 
For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Conclusions Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode. PMID:26352168
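
The record reports that sinogram view interpolation substantially helps FDK reconstruction of sparse-view data. A minimal sketch of that idea, assuming uniformly dropped views and simple linear interpolation along the angular axis (the function name, array layout, and interpolation scheme are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def interpolate_views(sparse_sino, kept_idx, n_views):
    """Linearly interpolate missing projection views along the angular axis.

    sparse_sino: array of shape (n_kept, n_det), the measured views.
    kept_idx:    angular indices (0..n_views-1) at which views were measured.
    """
    full = np.empty((n_views, sparse_sino.shape[1]))
    angles = np.arange(n_views)
    for d in range(sparse_sino.shape[1]):
        # np.interp holds edge values constant outside kept_idx's range
        full[:, d] = np.interp(angles, kept_idx, sparse_sino[:, d])
    return full
```

For a sinogram that varies smoothly with view angle, interpolating 41 kept views back to 984 recovers the full sinogram to within a small fraction of its amplitude; sharp angular features (which cause view aliasing) are exactly what this cannot recover.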

  9. Temporal prediction errors modulate task-switching performance

    PubMed Central

    Limongi, Roberto; Silva, Angélica M.; Góngora-Costa, Begoña

    2015-01-01

    We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus’ onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which leads to the hypothesis that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball, which bounced off after a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad hoc concepts such as “executive control” is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Temporal gaps (i.e., uncertainty) and temporal PEs would, respectively, drive and modulate this network: whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PEs, causing the ongoing behavior to terminate and, in consequence, disrupting task switching. PMID:26379568

  10. The function of the left anterior temporal pole: evidence from acute stroke and infarct volume

    PubMed Central

    Tsapkini, Kyrana; Frangakis, Constantine E.

    2011-01-01

    The role of the anterior temporal lobes in cognition and language has been much debated in the literature over the last few years. Most prevailing theories argue for an important role of the anterior temporal lobe as a semantic hub or a place for the representation of unique entities such as proper names of people and places. Lately, a few studies have investigated the role of the most anterior part of the left anterior temporal lobe, the left temporal pole in particular, and argued that the left anterior temporal pole is the area responsible for mapping meaning onto sound, through evidence from tasks such as object naming. However, another recent study indicates that bilateral anterior temporal damage is required to cause a clinically significant semantic impairment. In the present study, we tested these hypotheses by evaluating patients with acute stroke before reorganization of structure–function relationships. We compared a group of 20 patients with acute stroke with anterior temporal pole damage to a group of 28 without anterior temporal pole damage matched for infarct volume. We calculated the average percent error in auditory comprehension and naming tasks as a function of infarct volume using a non-parametric regression method. We found that infarct volume was the only predictive variable in the production of semantic errors in both auditory comprehension and object naming tasks. This finding favours the hypothesis that left unilateral anterior temporal pole lesions, even acutely, are unlikely to cause significant deficits in mapping meaning to sound by themselves, although they contribute to networks underlying both naming and comprehension of objects. Therefore, the anterior temporal lobe may be a semantic hub for object meaning, but its role must be represented bilaterally and perhaps redundantly. PMID:21685458

  11. Temporal prediction errors modulate task-switching performance.

    PubMed

    Limongi, Roberto; Silva, Angélica M; Góngora-Costa, Begoña

    2015-01-01

    We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus' onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which leads to the hypothesis that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball, which bounced off after a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad hoc concepts such as "executive control" is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Temporal gaps (i.e., uncertainty) and temporal PEs would, respectively, drive and modulate this network: whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PEs, causing the ongoing behavior to terminate and, in consequence, disrupting task switching.

  12. Temporal steering and security of quantum key distribution with mutually unbiased bases against individual attacks

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Černoch, Antonín; Lemr, Karel; Miranowicz, Adam; Nori, Franco

    2016-06-01

    Temporal steering, which is a temporal analog of Einstein-Podolsky-Rosen steering, refers to temporal quantum correlations between the initial and final state of a quantum system. Our analysis of temporal steering inequalities in relation to the average quantum bit error rates reveals the interplay between temporal steering and quantum cloning, which guarantees the security of quantum key distribution based on mutually unbiased bases against individual attacks. The key distributions analyzed here include the Bennett-Brassard 1984 protocol and the six-state 1998 protocol by Bruss. Moreover, we define a temporal steerable weight, which enables us to identify a kind of monogamy of temporal correlation that is essential to quantum cryptography and useful for analyzing various scenarios of quantum causality.

  13. Identifying Optimal Temporal Scale for the Correlation of AOD and Ground Measurements of PM2.5 to Improve the Model Performance in a Real-time Air Quality Estimation System

    NASA Technical Reports Server (NTRS)

    Li, Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey C.; Crosson, William; Rickman, Douglas; Limaye, Ashutosh

    2009-01-01

    Aerosol optical depth (AOD), an indirect estimate of particulate matter from satellite observations, has shown great promise in improving estimates of the PM2.5 air quality surface. Currently, few studies have explored the optimal way to apply AOD data to improve the accuracy of PM2.5 surface estimation in a real-time air quality system. We believe that two major aspects are worthy of consideration: 1) the approach to integrating satellite measurements with ground measurements in the pollution estimation, and 2) identification of an optimal temporal scale for calculating the correlation of AOD and ground measurements. This paper focuses on the second aspect: identifying the optimal temporal scale at which to correlate AOD with PM2.5. The following five temporal scales were chosen to evaluate their impact on model performance: 1) within the last 3 days, 2) within the last 10 days, 3) within the last 30 days, 4) within the last 90 days, and 5) the time period with the highest correlation in a year. The model performance is evaluated for accuracy, bias, and error using the following statistics: the Mean Bias, the Normalized Mean Bias, the Root Mean Square Error, the Normalized Mean Error, and the Index of Agreement. This research shows that the model using the 30-day temporal scale displays the best performance in this study area for the 2004 and 2005 data sets.
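
The five evaluation statistics named in this record have standard definitions in the air-quality literature; the record does not give the paper's exact formulas, so the sketch below uses the conventional ones (including Willmott's Index of Agreement):

```python
import numpy as np

def evaluation_metrics(pred, obs):
    """Standard model-evaluation statistics for paired predictions/observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mb = np.mean(pred - obs)                           # Mean Bias
    nmb = np.sum(pred - obs) / np.sum(obs)             # Normalized Mean Bias
    rmse = np.sqrt(np.mean((pred - obs) ** 2))         # Root Mean Square Error
    nme = np.sum(np.abs(pred - obs)) / np.sum(obs)     # Normalized Mean Error
    # Index of Agreement (Willmott, 1981): 1 = perfect, 0 = no agreement
    denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ioa = 1.0 - np.sum((pred - obs) ** 2) / denom
    return {"MB": mb, "NMB": nmb, "RMSE": rmse, "NME": nme, "IOA": ioa}
```

A perfect prediction gives MB = NMB = RMSE = NME = 0 and IOA = 1; a constant +1 offset shows up directly in MB and RMSE.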

  14. Addition of fornix transection to frontal-temporal disconnection increases the impairment in object-in-place memory in macaque monkeys.

    PubMed

    Wilson, C R E; Baxter, M G; Easton, A; Gaffan, D

    2008-04-01

    Both frontal-inferotemporal disconnection and fornix transection (Fx) in the monkey impair object-in-place scene learning, a model of human episodic memory. If the contribution of the fornix to scene learning is via interaction with or modulation of frontal-temporal interaction--that is, if they form a unitary system--then Fx should have no further effect when added to frontal-temporal disconnection. However, if the contribution of the fornix is to some extent distinct, then fornix lesions may produce an additional deficit in scene learning beyond that caused by frontal-temporal disconnection. To distinguish between these possibilities, we trained three male rhesus monkeys on the object-in-place scene-learning task. We tested their learning on the task following frontal-temporal disconnection, achieved by crossed unilateral aspiration of the frontal cortex in one hemisphere and the inferotemporal cortex in the other, and again following the addition of Fx. The monkeys were significantly impaired in scene learning following frontal-temporal disconnection, and furthermore showed a significant increase in this impairment following the addition of Fx, from 32.8% error to 40.5% error (chance = 50%). The increased impairment following the addition of Fx provides evidence that the fornix and frontal-inferotemporal interaction make distinct contributions to episodic memory.

  15. Noise in two-color electronic distance meter measurements revisited

    USGS Publications Warehouse

    Langbein, J.

    2004-01-01

    Frequent, high-precision geodetic data have temporally correlated errors. Temporal correlations directly affect both the estimate of rate and its standard error; the rate of deformation is a key product from geodetic measurements made in tectonically active areas. Various models of temporally correlated errors are developed and these provide relations between the power spectral density and the data covariance matrix. These relations are applied to two-color electronic distance meter (EDM) measurements made frequently in California over the past 15-20 years. Previous analysis indicated that these data have significant random walk error. Analysis using the noise models developed here indicates that the random walk model is valid for about 30% of the data. A second 30% of the data can be better modeled with power law noise with a spectral index between 1 and 2, while another 30% of the data can be modeled with a combination of band-pass-filtered plus random walk noise. The remaining 10% of the data can be best modeled as a combination of band-pass-filtered plus power law noise. This band-pass-filtered noise is a product of an annual cycle that leaks into adjacent frequency bands. For time spans of more than 1 year these more complex noise models indicate that the precision in rate estimates is better than that inferred by just the simpler, random walk model of noise.
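
The noise models in this record are distinguished by their power spectral density: white noise has spectral index ~0, random walk ~2, and intermediate power-law noise falls between. A minimal sketch of estimating the spectral index from the slope of the log-log periodogram (an unweighted fit on synthetic data; the paper's covariance-matrix approach is more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_index(x, dt=1.0):
    """Estimate the power-law index k in P(f) ~ f^-k from the
    slope of the log-log periodogram (zero frequency excluded)."""
    f = np.fft.rfftfreq(len(x), dt)[1:]
    p = np.abs(np.fft.rfft(x - np.mean(x)))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return -slope

# Random-walk noise (index ~ 2) is the cumulative sum of white noise (index ~ 0)
walk = np.cumsum(rng.standard_normal(8192))
white = rng.standard_normal(8192)
```

Applied to the synthetic series, the estimator recovers an index near 2 for the random walk and near 0 for white noise; the unweighted fit is slightly biased low for the walk because the periodogram flattens near the Nyquist frequency.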

  16. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    PubMed

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
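
The Bayesian central-tendency account in this record has a simple conjugate-Gaussian form: the reproduced interval is a precision-weighted average of the sensed interval and the mean of the current distribution. A sketch under that assumption (the numbers below are illustrative, not the paper's fitted parameters):

```python
def bayes_reproduction(measured, prior_mean, sigma_sense, sigma_prior):
    """Precision-weighted fusion of the sensed interval with the distribution mean.

    Higher sensory noise (sigma_sense) shifts weight onto the prior,
    producing stronger regression towards the mean interval.
    """
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sense ** 2)  # weight on sensation
    return w * measured + (1 - w) * prior_mean

# Illustrative: reproducing a 704 ms interval from a distribution with mean 528 ms
musician = bayes_reproduction(704, 528, sigma_sense=30, sigma_prior=100)
non_musician = bayes_reproduction(704, 528, sigma_sense=90, sigma_prior=100)
```

With lower sensory noise, the "musician" estimate stays close to the true 704 ms, while the noisier "non-musician" estimate regresses markedly towards 528 ms, mirroring the pattern reported in the record.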

  17. Errors Recruit both Cognitive and Emotional Monitoring Systems: Simultaneous Intracranial Recordings in the Dorsal Anterior Cingulate Gyrus and Amygdala Combined with fMRI

    ERIC Educational Resources Information Center

    Pourtois, Gilles; Vocat, Roland; N'Diaye, Karim; Spinelli, Laurent; Seeck, Margitta; Vuilleumier, Patrik

    2010-01-01

    We studied error monitoring in a human patient with unique implantation of depth electrodes in both the left dorsal cingulate gyrus and medial temporal lobe prior to surgery. The patient performed a speeded go/nogo task and made a substantial number of commission errors (false alarms). As predicted, intracranial Local Field Potentials (iLFPs) in…

  18. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
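
The error-propagation reasoning behind such optimization models can be illustrated on the simplest building block: pressure-gradient estimation requires spatial derivatives of velocity, and a central difference amplifies i.i.d. random error sigma_u by a factor sqrt(2)/(2*dx). A sketch (this is a generic propagation result, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_noise_std(sigma_u, dx):
    """Std of a central-difference derivative (u[i+1]-u[i-1])/(2*dx)
    when the samples carry i.i.d. noise of std sigma_u."""
    return sigma_u * np.sqrt(2) / (2 * dx)

# Monte-Carlo check: differentiate a constant field corrupted by noise,
# so the computed derivative is pure propagated error
dx, sigma = 0.5, 0.1
u = sigma * rng.standard_normal((20000, 3))       # 3-point stencils
dudx = (u[:, 2] - u[:, 0]) / (2 * dx)
```

This is why shrinking dx (finer spatial resolution) eventually hurts: the true gradient signal saturates while the propagated random error grows as 1/dx, giving the optimum resolutions the study derives.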

  19. Generalized Aliasing as a Basis for Program Analysis Tools

    DTIC Science & Technology

    2000-11-01


  20. Aliasing of the Schumann resonance background signal by sprite-associated Q-bursts

    NASA Astrophysics Data System (ADS)

    Guha, Anirban; Williams, Earle; Boldi, Robert; Sátori, Gabriella; Nagy, Tamás; Bór, József; Montanyà, Joan; Ortega, Pascal

    2017-12-01

    The Earth's naturally occurring Schumann resonances (SR) are composed of a quasi-continuous background component and a larger-amplitude, short-duration transient component, otherwise called the 'Q-burst' (Ogawa et al., 1967). Sprites in the mesosphere are also known to accompany the energetic positive ground flashes that launch the Q-bursts (Boccippio et al., 1995). Spectra of the background SR require a natural stabilization period of ∼10-12 min for the three conspicuous modal parameters to be derived from Lorentzian fitting. Before the spectra are computed and the fitting process is initiated, the raw time series data need to be properly filtered for local cultural noise and narrow band interference, as well as for large transients in the form of global Q-bursts. Mushtak and Williams (2009) describe an effective technique called Isolated Lorentzian (I-LOR), in which the contributions from local cultural and various other noises are minimized to a great extent. An automated technique based on median filtering of time series data has been developed. The energetic positive ground flashes that launch Q-bursts are known to make a greater contribution in the ELF range (below 1 kHz) than general negative CG strikes (Huang et al., 1999; Cummer et al., 2006). The global distributions of these Q-bursts have been studied with wave impedance methods applied to single-station ELF measurements in Rhode Island, USA (Huang et al., 1999) and in Japan (Hobara et al., 2006). The present work aims to demonstrate the effect of Q-bursts on SR background spectra using GPS time-stamped observations of TLEs. It is observed that the Q-bursts selected for the present work do alias the background spectra over a 5-s period; however, their amplitudes are far below the background threshold of 16 Core Standard Deviations (CSD), so they do not strongly alias the background spectra of 10-12 min duration. 
The examination of one exceptional Q-burst shows that appreciable spectral aliasing can occur even when 12-min spectral integrations are considered. The statistical result shows that for a 12-min spectrum, events above 16 CSD are capable of producing significant frequency aliasing of the modal frequencies, although the intensity aliasing might have a negligible effect unless the events are exceptionally large (∼200 CSD). The spectral CSD methodology may be used to extract the time of arrival of the Q-burst transients. This methodology may be combined with hyperbolic ranging, thus becoming an effective tool to detect TLEs globally with a modest number of networked observational stations.
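
The record mentions an automated median-filtering technique for removing large transients from the raw time series before spectral fitting. A minimal sketch of that idea, assuming a running median with a robust (MAD-based) deviation threshold; the window length and threshold here are illustrative, not the paper's parameters:

```python
import numpy as np

def flag_transients(x, window=101, n_sd=16.0):
    """Flag samples whose deviation from a running median exceeds
    n_sd robust standard deviations (MAD scaled to Gaussian sigma)."""
    half = window // 2
    xp = np.pad(x, half, mode="edge")
    med = np.array([np.median(xp[i:i + window]) for i in range(len(x))])
    resid = x - med
    sd = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
    return np.abs(resid) > n_sd * sd
```

Because the median of a 101-sample window is unaffected by a single large spike, an isolated Q-burst-like transient stands out cleanly in the residual while the quasi-continuous background stays below the threshold.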

  1. Evaluation and error apportionment of an ensemble of atmospheric chemistry transport modeling systems: multivariable temporal and spatial breakdown

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  2. Complementary roles for amygdala and periaqueductal gray in temporal-difference fear learning.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2009-01-01

    Pavlovian fear conditioning is not a unitary process. At the neurobiological level multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level many variables contribute to fear learning including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the rate at which these are learned about. These experiments used a serial compound conditioning design to determine the roles of basolateral amygdala (BLA) NMDA receptors and ventrolateral midbrain periaqueductal gray (vlPAG) mu-opioid receptors (MOR) in predictive fear learning. Rats received a three-stage design, which arranged for both positive and negative prediction errors producing bidirectional changes in fear learning within the same subjects during the test stage. Intra-BLA infusion of the NR2B receptor antagonist Ifenprodil prevented all learning. In contrast, intra-vlPAG infusion of the MOR antagonist CTAP enhanced learning in response to positive predictive error but impaired learning in response to negative predictive error--a pattern similar to Hebbian learning and an indication that fear learning had been divorced from predictive error. These findings identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG MOR in temporal-difference predictive fear learning.

  3. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some conditions (especially organizational or managerial ones) can hardly be included, so the analysis is incomplete and fails to reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among the technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitative estimation of the human error probability (HEP) and its related variables as they change over a long period. Taking the 2008 Minuteman III missile accident as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be addressed to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Analysis of all-optical temporal integrator employing phase-shifted DFB-SOA.

    PubMed

    Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li

    2014-11-17

    An all-optical temporal integrator using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and an extended integration time window can be achieved simultaneously by increasing the injected current in the vicinity of the lasing threshold. We find that the range of input pulse widths with low integration error is highly sensitive to the injected optical power, due to gain saturation and the induced detuning-deviation mechanism. The initial frequency detuning should also be carefully chosen to suppress the integration deviation from the ideal waveform output.

  5. Music Recognition in Frontotemporal Lobar Degeneration and Alzheimer Disease

    PubMed Central

    Johnson, Julene K; Chang, Chiung-Chih; Brambati, Simona M; Migliaccio, Raffaella; Gorno-Tempini, Maria Luisa; Miller, Bruce L; Janata, Petr

    2013-01-01

    Objective To compare music recognition in patients with frontotemporal dementia, semantic dementia, Alzheimer disease, and controls and to evaluate the relationship between music recognition and brain volume. Background Recognition of familiar music depends on several levels of processing. There are few studies about how patients with dementia recognize familiar music. Methods Subjects were administered tasks that assess pitch and melody discrimination, detection of pitch errors in familiar melodies, and naming of familiar melodies. Results There were no group differences on pitch and melody discrimination tasks. However, patients with semantic dementia had considerable difficulty naming familiar melodies and also scored the lowest when asked to identify pitch errors in the same melodies. Naming familiar melodies, but not other music tasks, was strongly related to measures of semantic memory. Voxel-based morphometry analysis of brain MRI showed that difficulty in naming songs was associated with the bilateral temporal lobes and the inferior frontal gyrus, whereas difficulty in identifying pitch errors in familiar melodies correlated primarily with the right temporal lobe. Conclusions The results support a view that the anterior temporal lobes play a role in familiar melody recognition, and that musical functions are affected differentially across forms of dementia. PMID:21617528

  6. Using Bayesian hierarchical models to better understand nitrate sources and sinks in agricultural watersheds.

    PubMed

    Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan

    2016-11-15

    Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations, a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best--it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. 
Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Spatial and temporal temperature distribution optimization for a geostationary antenna

    NASA Technical Reports Server (NTRS)

    Tsuyuki, G.; Miyake, R.

    1992-01-01

    The Geostationary Microwave Precipitation Radiometer antenna is considered and a thermal design analysis is performed to determine a design that would minimize on-orbit antenna temporal and spatial temperature gradients. The final design is based on an optically opaque radome which covered the antenna. The average orbital antenna temperature is found to be 9 C with maximum temporal and spatial variations of 34 C and 1 C, respectively. An independent thermal distortion analysis showed that this temporal variation would give an antenna figure error of 14 microns.

  8. 76 FR 34720 - Chemical Facility Anti-Terrorism Standards Personnel Surety Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ...; Date of birth; Place of birth; Gender; Citizenship; Passport information; Visa information; Alien... birth; and c. Citizenship or Gender. The Department will require that high-risk chemical facilities.... Aliases; b. Gender (for Non-U.S. persons); c. Place of birth; and d. DHS Redress Number. In lieu of...

  9. 77 FR 28250 - Entity List Additions; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... person who was added under the destination of Pakistan to clarify the text is the address of this person... follows: Pakistan (1) Jalaluddin Haqqani, a.k.a., the following seven aliases: --General Jalaluddin... Jalaluddin. --Miram Shah, Pakistan. United Arab Emirates (1) Al Maskah Used Car and Spare Parts, Maliha Road...

  10. Android REST Client Application to View, Collect, and Exploit Video and Image Data

    DTIC Science & Technology

    2013-09-01

    Superresolution Image Reconstruction From a Sequence of Aliased Imagery. Applied Optics 2006, 45 (21), 5073–5085. 3, Driggers, R. G.; Krapels, K. A...Murrill, S.; Young, S. S.; Theilke, M.; Schuler, J. M. Superresolution Performance for Undersampled Imagers. Optical Engineering 2005, 44 (01). 4. Young

  11. 75 FR 62173 - In the Matter of the Review of the Designation of Jemaah Islamiya (JI and Other Aliases) as a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... maintained. This determination shall be published in the Federal Register. Dated: September 28, 2010. Hillary Rodham Clinton, Secretary of State. [FR Doc. 2010-25333 Filed 10-6-10; 8:45 am] BILLING CODE 4710-10-P ...

  12. Enhancing National Security by Strengthening the Legal Immigration System

    DTIC Science & Technology

    2009-12-01

    Ramzi Yousef traveled from Pakistan to New York’s John F. Kennedy ( JFK ) airport using aliases. Both men possessed a variety of documents, including...both Yousef and another conspirator, Eyad Ismoil, to JFK airport . Yousef used a false passport to escape to Pakistan, and Ismoil fled to Jordan

  13. A functional model for characterizing long-distance movement behaviour

    USGS Publications Warehouse

    Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.

    2016-01-01

    Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.

  14. A framework for simulating map error in ecosystem models

    Treesearch

    Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard

    2014-01-01

    The temporal depth and spatial breadth of observations from platforms such as Landsat provide unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...

  15. Microsurgical and Endoscopic Anatomy for Intradural Temporal Bone Drilling and Applications of the Electromagnetic Navigation System: Various Extensions of the Retrosigmoid Approach.

    PubMed

    Matsushima, Ken; Komune, Noritaka; Matsuo, Satoshi; Kohno, Michihiro

    2017-07-01

    The use of the retrosigmoid approach has recently been expanded by several modifications, including the suprameatal, transmeatal, suprajugular, and inframeatal extensions. Intradural temporal bone drilling without damaging vital structures inside or beside the bone, such as the internal carotid artery and jugular bulb, is a key step for these extensions. This study aimed to examine the microsurgical and endoscopic anatomy of the extensions of the retrosigmoid approach and to evaluate the clinical feasibility of an electromagnetic navigation system during intradural temporal bone drilling. Five temporal bones and 8 cadaveric cerebellopontine angles were examined to clarify the anatomy of retrosigmoid intradural temporal bone drilling. Twenty additional cerebellopontine angles were dissected in a clinical setting with an electromagnetic navigation system while measuring the target registration errors at 8 surgical landmarks on and inside the temporal bone. Retrosigmoid intradural temporal bone drilling expanded the surgical exposure to allow access to the petroclival and parasellar regions (suprameatal), internal acoustic meatus (transmeatal), upper jugular foramen (suprajugular), and petrous apex (inframeatal). The electromagnetic navigation continuously guided the drilling without line of sight limitation, and its small devices were easily manipulated in the deep and narrow surgical field in the posterior fossa. Mean target registration error was less than 0.50 mm during these procedures. The combination of endoscopic and microsurgical techniques aids in achieving optimal exposure for retrosigmoid intradural temporal bone drilling. The electromagnetic navigation system had clear advantages with acceptable accuracy including the usability of small devices without line of sight limitation. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans

    PubMed Central

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies have demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards at different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies. PMID:24302894

  17. Monitoring gait in multiple sclerosis with novel wearable motion sensors

    PubMed Central

    McGinnis, Ryan S.; Seagers, Kirsten; Motl, Robert W.; Sheth, Nirav; Wright, John A.; Ghaffari, Roozbeh; Sosnoff, Jacob J.

    2017-01-01

    Background Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. Methods A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) were assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Results Mean accuracy ± precision for the BioStampRC was 2 ± 2 steps error for step number, 6 ± 9 ms error for stride time and 6 ± 7 ms error for step time (0.6–2.6% relative error). Swing time had the least accuracy ± precision (25 ± 19 ms error, 5 ± 4% relative error) among the parameters. GT3X had the least accuracy ± precision (8 ± 14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p < 0.01). 
    Conclusion BioStampRC sensors accurately and precisely measured gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic. PMID:28178288
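    The accuracy/precision bookkeeping reported in this record (mean error ± spread of error, plus a relative error) can be sketched as below. The stride-time values are hypothetical stand-ins, not the study's data; the real comparison was BioStampRC against the MTx reference.

```python
import numpy as np

# Hypothetical stride-time estimates (s) from a wearable sensor and a
# reference system; values are illustrative only.
ref = np.array([1.100, 1.120, 1.090, 1.110, 1.105])
est = np.array([1.104, 1.115, 1.097, 1.111, 1.100])

err = est - ref
accuracy = np.mean(np.abs(err))           # mean absolute error (s)
precision = np.std(err, ddof=1)           # spread of the error (s)
relative = 100 * accuracy / np.mean(ref)  # relative error (%)
print(f"accuracy = {accuracy * 1e3:.1f} ms, "
      f"precision = {precision * 1e3:.1f} ms, relative = {relative:.2f}%")
```

    With these toy numbers the sketch reports roughly a 4 ms mean error with a 5 ms spread, i.e. the same "accuracy ± precision" form used in the abstract.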

  18. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove the redundancy of inter-frames along the temporal direction using motion compensated temporal filtering, so that high coding performance and flexible scalability can be provided in this scheme. In order to make the compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.

  19. The vertical structure of upper ocean variability at the Porcupine Abyssal Plain during 2012–2013

    PubMed Central

    Heywood, Karen J.; Thompson, Andrew F.; Binetti, Umberto; Kaiser, Jan

    2016-01-01

    Abstract This study presents the characterization of variability in temperature, salinity and oxygen concentration, including the vertical structure of the variability, in the upper 1000 m of the ocean over a full year in the northeast Atlantic. Continuously profiling ocean gliders with vertical resolution between 0.5 and 1 m provide more information on temporal variability throughout the water column than time series from moorings with sensors at a limited number of fixed depths. The heat, salt and dissolved oxygen content are quantified at each depth. While the near-surface heat content is consistent with the net surface heat flux, heat content of the deeper layers is driven by gyre-scale water mass changes. Below ∼150 m, heat and salt content display intraseasonal variability which has not been resolved by previous studies. A mode-1 baroclinic internal tide is detected as a peak in the power spectra of water mass properties. The depth of minimum variability is at ∼415 m for both temperature and salinity, but this is a depth of high variability for oxygen concentration. The deep variability is dominated by the intermittent appearance of Mediterranean Water, which shows evidence of filamentation. Susceptibility to salt fingering occurs throughout much of the water column for much of the year. Between about 700 and 900 m, the water column is susceptible to diffusive layering, particularly when Mediterranean Water is present. This unique ability to resolve both high vertical and temporal variability highlights the importance of intraseasonal variability in upper ocean heat and salt content, variations that may be aliased by traditional observing techniques. PMID:27840785

  20. Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception

    PubMed Central

    Davis, Matthew H.

    2016-01-01

    Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. 
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209

  1. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
    NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to the understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.

  2. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would normally be expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
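    The error budget described above, where independent sources (measurement, allometric, co-location, temporal) are compared by their share of the total variance, can be sketched as follows. The relative-error magnitudes here are illustrative assumptions, not the paper's estimates; independent errors combine in quadrature, so variances add.

```python
import math

# Illustrative relative standard errors (fractions of plot biomass);
# the actual magnitudes in the study differ, but the bookkeeping is the same.
sources = {
    "measurement": 0.05,
    "allometric":  0.10,
    "co-location": 0.15,
    "temporal":    0.12,
}

# Independent error sources combine in quadrature: variances add.
total_var = sum(e ** 2 for e in sources.values())
total_err = math.sqrt(total_var)

for name, e in sources.items():
    share = 100 * e ** 2 / total_var
    print(f"{name:12s}: {share:5.1f}% of total variance")
print(f"combined relative error: {100 * total_err:.1f}%")
```

    With these toy magnitudes the co-location and temporal terms dominate the variance, mirroring the paper's qualitative finding that calibration-specific errors are the main targets for uncertainty reduction.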

  3. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. 
degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage chemical transport model users to specify in publications the durations of operators, given their effects on simulation accuracy.
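    The accuracy metric used in this comparison, a root-mean-square difference in ground-level concentrations against a finer-resolution run taken as truth, can be sketched as below. The concentration fields are synthetic stand-ins for GEOS-Chem output, with coarser-timestep runs mimicked as perturbed copies of the reference.

```python
import numpy as np

def rmsd(sim, ref):
    """Root-mean-square difference between two simulated concentration fields."""
    sim, ref = np.asarray(sim, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

# Hypothetical ground-level ozone fields (ppb): the fine-timestep run serves
# as truth; coarser-timestep runs are mimicked by adding larger perturbations.
rng = np.random.default_rng(0)
truth = rng.normal(40.0, 5.0, size=1000)           # "5-min" reference run
sim_10 = truth + rng.normal(0.0, 0.5, size=1000)   # stand-in for a 10-min run
sim_60 = truth + rng.normal(0.0, 3.0, size=1000)   # stand-in for a 60-min run
print(f"RMSD(10 min) = {rmsd(sim_10, truth):.2f} ppb")
print(f"RMSD(60 min) = {rmsd(sim_60, truth):.2f} ppb")
```

    The growth of RMSD with operator duration in this toy setup parallels the paper's finding that simulation error increases by more than a factor of 5 from 5-min to 60-min timesteps.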

  4. Are there meaningful individual differences in temporal inconsistency in self-reported personality?

    PubMed

    Soubelet, Andrea; Salthouse, Timothy A; Oishi, Shigehiro

    2014-11-01

    The current project had three goals. The first was to examine whether it is meaningful to refer to across-time variability in self-reported personality as an individual differences characteristic. The second was to investigate whether negative affect was associated with variability in self-reported personality, while controlling for mean levels and correcting for measurement error. The third goal was to examine whether variability in self-reported personality would be larger among young adults than among older adults, and whether the relation of variability with negative affect would be stronger at older ages than at younger ages. Two moderately large samples of participants completed the International Personality Item Pool questionnaire assessing the Big Five personality dimensions either twice or thrice, in addition to several measures of negative affect. Results were consistent with the hypothesis that within-person variability in self-reported personality is a meaningful individual difference characteristic. Some people exhibited greater across-time variability than others after removing measurement error, and people who showed temporal instability in one trait also exhibited temporal instability across the other four traits. However, temporal variability was not related to negative affect, and there was no evidence that either temporal variability or its association with negative affect varied with age.

  5. 77 FR 58006 - Addition of Certain Persons to the Entity List; Removal of Person From the Entity List Based on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ...; (5) Chinese Academy of Engineering Physics, a.k.a., the following seventeen aliases: --Ninth Academy...; --Southwest Institute of Explosives and Chemical Engineering; --Southwest Institute of Fluid Physics...; --Southwest Institute of Materials; --Southwest Institute of Nuclear Physics and Chemistry (a.k.a., China...

  6. 75 FR 9238 - Privacy Act of 1974; Department of Homeland Security United States Immigration Customs and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-01

    ... place of birth; passport and other travel document information; nationality; aliases; Alien Registration... date and time of a successful collection and confirmation from the FBI that the sample was able to be... alleged violations of criminal or immigration law (location, date, time, event category, types of criminal...

  7. Aliasing, Ambiguities, and Interpolation in Wideband Direction-of-Arrival Estimation Using Antenna Arrays

    ERIC Educational Resources Information Center

    Ho, Chung-Cheng

    2016-01-01

    For decades, direction finding has been an important research topic in many applications such as radar, location services, and medical diagnosis and treatment. In such applications the precision of location estimation plays an important role, so a higher-precision location estimation method is always desirable. Although…

  8. Regional Characteristics for Interpreting Inverted Echo Sounder (IES) observations

    DTIC Science & Technology

    1987-06-01

    …rounding the IESs. There are seasonal warming and cooling effects which may be missed with… and ideally, we should like to have a series of hydro-… thermocline… This shallow variability is likely to be spatially and temporally aliased; it may be associated with internal waves or frontal fluctuations…

  9. Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness

    NASA Astrophysics Data System (ADS)

    Hardy, Tyler J.; Cain, Stephen C.

    2016-05-01

    The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). A MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the assumption of a priori probability used in a MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented into a MHT algorithm, and the algorithm is then tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
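    The idea of testing a candidate detection against several sub-pixel-shifted PSF hypotheses, each weighted by a prior probability, can be sketched in one dimension as below. This is a minimal illustration, not the authors' algorithm: the Gaussian PSF, the shift set, the priors, and the noise level are all assumptions.

```python
import numpy as np

def gaussian_psf(shift, size=5, sigma=1.0):
    """Unit-sum Gaussian PSF sampled at integer pixels, offset by `shift`."""
    x = np.arange(size) - size // 2 - shift
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

shifts = [-0.25, 0.0, 0.25]   # sub-pixel shift hypotheses
priors = [0.25, 0.5, 0.25]    # illustrative prior probabilities
sigma_n = 0.01                # assumed noise standard deviation

# Noisy observation of a point source offset by a quarter pixel
rng = np.random.default_rng(1)
data = gaussian_psf(0.25) + sigma_n * rng.normal(size=5)

# Log-posterior (up to a constant) under additive Gaussian noise:
# log prior + Gaussian log-likelihood of the residual
logpost = [np.log(p) - np.sum((data - gaussian_psf(s)) ** 2) / (2 * sigma_n ** 2)
           for s, p in zip(shifts, priors)]
best = shifts[int(np.argmax(logpost))]
print(f"selected sub-pixel shift hypothesis: {best}")
```

    Changing the prior vector shifts the decision boundary between hypotheses, which is the sensitivity the paper investigates.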

  10. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  11. Ground roll attenuation by synchrosqueezed curvelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Chen, Yangkang; Ma, Jianwei

    2018-04-01

    Ground roll is a type of coherent noise in land seismic data that has low frequency, low velocity and high amplitude. It damages reflection events that contain important information about subsurface structures, hence the removal of ground roll is a crucial step in seismic data processing. A suitable transform is needed for removal of ground roll. Curvelet transform is an effective sparse transform that optimally represents seismic events. In addition, the curvelets can provide a multiscale and multidirectional decomposition of the input data in time-frequency and angular domain, which can help distinguish between ground roll and useful signals. In this paper, we apply synchrosqueezed curvelet transform (SSCT) for ground roll attenuation. The synchrosqueezing technique in SSCT is used to precisely reallocate the energy of local wave vectors in order to separate ground roll from the original data with higher resolution and higher fidelity. Examples of synthetic and field seismic data reveal that SSCT performs well in the suppression of aliased and non-aliased ground roll while preserving reflection waves, in comparison with high-pass filtering, wavelet and curvelet methods.

  12. Anatomy of an error: a bidirectional state model of task engagement/disengagement and attention-related errors.

    PubMed

    Allan Cheyne, J; Solman, Grayden J F; Carriere, Jonathan S A; Smilek, Daniel

    2009-04-01

    We present arguments and evidence for a three-state attentional model of task engagement/disengagement. The model postulates three states of mind-wandering: occurrent task inattention, generic task inattention, and response disengagement. We hypothesize that all three states are both causes and consequences of task performance outcomes and apply across a variety of experimental and real-world tasks. We apply this model to the analysis of a widely used GO/NOGO task, the Sustained Attention to Response Task (SART). We identify three performance characteristics of the SART that map onto the three states of the model: RT variability, anticipations, and omissions. Predictions based on the model are tested, and largely corroborated, via regression and lag-sequential analyses of both successful and unsuccessful withholding on NOGO trials as well as self-reported mind-wandering and everyday cognitive errors. The results revealed theoretically consistent temporal associations among the state indicators and between these and SART errors as well as with self-report measures. Lag analysis was consistent with the hypotheses that temporal transitions among states are often extremely abrupt and that the association between mind-wandering and performance is bidirectional. The bidirectional effects suggest that errors constitute important occasions for reactive mind-wandering. The model also enables concrete phenomenological, behavioral, and physiological predictions for future research.

  13. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

    Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a "brute force" approach.
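    The "brute force" approach the authors compare against can be sketched as a plug-in estimator over binned spike words; the example below uses synthetic Bernoulli spike trains (not recorded data) and illustrates the estimator's characteristic downward bias under limited sampling:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical ensemble: 5000 trials of 8-bin binary spike words,
# independent bins with spike probability 0.2
trials = (rng.random((5000, 8)) < 0.2).astype(int)

# "Brute force" plug-in entropy: count each distinct spike word,
# then apply H = -sum p * log2(p)
counts = Counter(map(tuple, trials))
p = np.array(list(counts.values())) / trials.shape[0]
H_est = -np.sum(p * np.log2(p))

# True entropy of 8 independent Bernoulli(0.2) bins, for comparison
h_bin = -(0.2 * np.log2(0.2) + 0.8 * np.log2(0.8))
print(H_est, 8 * h_bin)  # plug-in estimate sits slightly below the true value
```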

  14. Prediction of human errors by maladaptive changes in event-related brain networks.

    PubMed

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus

    2008-04-22

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.

  15. Prediction of human errors by maladaptive changes in event-related brain networks

    PubMed Central

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123

  16. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  17. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  18. Dopamine reward prediction-error signalling: a two-component response

    PubMed Central

    Schultz, Wolfram

    2017-01-01

    Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020

  19. Simultaneous Multislice Echo Planar Imaging With Blipped Controlled Aliasing in Parallel Imaging Results in Higher Acceleration: A Promising Technique for Accelerated Diffusion Tensor Imaging of Skeletal Muscle.

    PubMed

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-07-01

    The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) with simultaneous multislice excitation and a blipped "controlled aliasing in parallel imaging results in higher acceleration" (blipped-CAIPI) unaliasing technique. After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic resonance scanner using a 15-channel knee coil. The EPI was performed at a b value of 500 s/mm2 without slice acceleration (conventional DTI) as well as with 2-fold and 3-fold acceleration. Fractional anisotropy (FA) and mean diffusivity (MD) were measured in all 3 acquisitions. Fiber tracking performance was compared between the acquisitions regarding the number of tracks, average track length, and anatomical precision using multivariate analysis of variance and Mann-Whitney U tests. Acquisition time was 7:24 minutes for conventional DTI, 3:53 minutes for 2-fold acceleration, and 2:38 minutes for 3-fold acceleration. Overall FA and MD values ranged from 0.220 to 0.378 and from 1.595 to 1.829 × 10-3 mm2/s, respectively. Two-fold acceleration yielded similar FA and MD values (P ≥ 0.901) and similar fiber tracking performance compared with conventional DTI. Three-fold acceleration resulted in comparable MD (P = 0.199) but higher FA values (P = 0.006) and significantly impaired fiber tracking in the soleus and tibialis anterior muscles (number of tracks, P < 0.001; anatomical precision, P ≤ 0.005). Simultaneous multislice EPI with blipped-CAIPI can markedly reduce acquisition time in DTI of skeletal muscle with similar image quality and quantification accuracy of diffusion parameters. This may increase the clinical applicability of muscle anisotropy measurements.

  20. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool, with very few attempts to quantify its images. Many imaging artifacts hinder the quantification of the color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we address the color Doppler aliasing problem and present a methodology for recovering the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations in other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on recently developed cutline-detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.
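    To make the aliasing mechanism concrete, here is a minimal 1D sketch (the paper's problem is 2D, and the Nyquist velocity below is an assumed value, not taken from the paper): velocities above the Nyquist limit wrap in phase, and unwrapping restores them when the true phase changes smoothly between samples.

```python
import numpy as np

v_nyquist = 40.0                                     # cm/s (assumed)
true_v = 60.0 * np.sin(np.linspace(0, np.pi, 100))   # peak 60 cm/s > v_nyquist

phase_true = np.pi * true_v / v_nyquist              # phase proportional to velocity
phase_wrapped = np.angle(np.exp(1j * phase_true))    # measurement wraps into (-pi, pi]
v_aliased = v_nyquist * phase_wrapped / np.pi        # flow appears to reverse direction

# 1D phase unwrapping recovers the true velocities
v_unwrapped = v_nyquist * np.unwrap(phase_wrapped) / np.pi
print(np.allclose(v_unwrapped, true_v))  # True
```

    The 2D case is harder because unwrapping paths can be inconsistent around noise-induced residues, which is what cutline-detection approaches address.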

  1. Precision Closed-Loop Orbital Maneuvering System Design and Performance for the Magnetospheric Multi-Scale Mission (MMS) Formation

    NASA Technical Reports Server (NTRS)

    Chai, Dean; Queen, Steve; Placanica, Sam

    2015-01-01

    NASA's Magnetospheric Multi-Scale (MMS) mission, successfully launched on March 13, 2015 (UTC), consists of four identically instrumented spin-stabilized observatories that function as a constellation to study magnetic reconnection in space. The need to maintain sufficiently accurate spatial and temporal formation resolution of the observatories must be balanced against the logistical constraints of executing overly-frequent maneuvers on a small fleet of spacecraft. These two considerations make for an extremely challenging maneuver design problem. This paper focuses on the design elements of a 6-DOF spacecraft attitude control and maneuvering system capable of delivering the high-precision adjustments required by the constellation designers---specifically, the design, implementation, and on-orbit performance of the closed-loop formation-class maneuvers that include initialization, maintenance, and re-sizing. The maneuvering control system flown on MMS utilizes a micro-gravity resolution accelerometer sampled at a high rate in order to achieve closed-loop velocity tracking of an inertial target with arc-minute directional and millimeter-per-second magnitude accuracy. This paper summarizes the techniques used for correcting bias drift, sensor-head offsets, and centripetal aliasing in the acceleration measurements. It also discusses the on-board pre-maneuver calibration and compensation algorithms as well as the implementation of the post-maneuver attitude adjustments.

  2. Precision Closed-Loop Orbital Maneuvering System Design and Performance for the Magnetospheric Multiscale Formation

    NASA Technical Reports Server (NTRS)

    Chai, Dean J.; Queen, Steven Z.; Placanica, Samuel J.

    2015-01-01

    NASA's Magnetospheric Multiscale (MMS) mission, successfully launched on March 13, 2015 (UTC), consists of four identically instrumented spin-stabilized observatories that function as a constellation to study magnetic reconnection in space. The need to maintain sufficiently accurate spatial and temporal formation resolution of the observatories must be balanced against the logistical constraints of executing overly-frequent maneuvers on a small fleet of spacecraft. These two considerations make for an extremely challenging maneuver design problem. This paper focuses on the design elements of a 6-DOF spacecraft attitude control and maneuvering system capable of delivering the high-precision adjustments required by the constellation designers: specifically, the design, implementation, and on-orbit performance of the closed-loop formation-class maneuvers that include initialization, maintenance, and re-sizing. The maneuvering control system flown on MMS utilizes a micro-gravity resolution accelerometer sampled at a high rate in order to achieve closed-loop velocity tracking of an inertial target with arc-minute directional and millimeter-per-second magnitude accuracy. This paper summarizes the techniques used for correcting bias drift, sensor-head offsets, and centripetal aliasing in the acceleration measurements. It also discusses the on-board pre-maneuver calibration and compensation algorithms as well as the implementation of the post-maneuver attitude adjustments.

  3. Complex-Difference Constrained Compressed Sensing Reconstruction for Accelerated PRF Thermometry with Application to MRI Induced RF Heating

    PubMed Central

    Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.

    2014-01-01

    Purpose: To introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for MRI-induced radiofrequency (RF) heating evaluation. Methods: A compressed sensing approach that exploits sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and retrospectively undersampled data acquired in ex-vivo and in-vivo studies by comparing performance with previously proposed techniques. Results: The proposed complex-difference constrained compressed sensing reconstruction method improved the reconstruction of smooth and local PRF temperature change images compared to various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion: Complex-difference based compressed sensing with utilization of a fully sampled baseline image improves the reconstruction accuracy for accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099
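    For context, PRF shift thermometry itself converts a phase difference into a temperature change through a standard linear relation; the sketch below uses commonly quoted constants (proton gyromagnetic ratio, a PRF thermal coefficient of about -0.01 ppm/degC) and illustrative scan settings, none of which are taken from the paper.

```python
import numpy as np

GAMMA = 42.58e6        # proton gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6       # PRF thermal coefficient, ppm/degC as a dimensionless factor
B0 = 3.0               # field strength, T (assumed)
TE = 10e-3             # echo time, s (assumed)

def phase_to_temp(d_phase):
    """Convert a phase difference (rad) between post-heating and baseline
    images into a temperature change (degC)."""
    return d_phase / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

# A 5 degC rise at these settings produces roughly -0.40 rad of phase
d_phi = 2 * np.pi * GAMMA * ALPHA * B0 * TE * 5.0
print(d_phi, phase_to_temp(d_phi))  # phase_to_temp(d_phi) is 5.0
```

    The small phase signal relative to noise and undersampling artifacts is what makes the complex-difference sparsity constraint attractive.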

  4. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors following uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.

  5. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  6. The cerebellum predicts the temporal consequences of observed motor acts.

    PubMed

    Avanzino, Laura; Bove, Marco; Pelosin, Elisa; Ogliastro, Carla; Lagravinese, Giovanna; Martino, Davide

    2015-01-01

    It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. The ability to extract temporal patterns and regularity of events is referred to as temporal expectation. Temporal expectation activates the same cerebral network usually engaged in action selection, comprising the cerebellum. However, it is unclear whether the cerebellum is directly involved in temporal expectation when timing information is processed to make predictions about the outcome of a motor act. Healthy volunteers received one session of either active (inhibitory, 1 Hz) or sham repetitive transcranial magnetic stimulation over the right lateral cerebellum prior to the execution of a temporal expectation task. Subjects were asked to predict the end of a visually perceived human body motion (right hand handwriting) and of an inanimate object motion (a moving circle reaching a target). Videos representing the movements were shown in full; the actual task consisted of watching the same videos interrupted, after a variable interval from onset, by a dark interval of variable duration. During the dark interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking the spacebar of the keyboard. Performance on the timing task was analyzed by measuring the absolute value of the timing error, the coefficient of variability, and the percentage of anticipation responses. The active group exhibited greater absolute timing error compared with the sham group only in the human body motion task. Our findings suggest that the cerebellum is engaged in cognitive and perceptual domains that are strictly connected to motor control.

  7. In vitro evaluation of the imaging accuracy of C-arm conebeam CT in cerebral perfusion imaging

    PubMed Central

    Ganguly, A.; Fieselmann, A.; Boese, J.; Rohkohl, C.; Hornegger, J.; Fahrig, R.

    2012-01-01

    Purpose: The authors have developed a method to enable cerebral perfusion CT imaging using C-arm based conebeam CT (CBCT). This allows intraprocedural monitoring of brain perfusion during treatment of stroke. Briefly, the technique consists of acquiring multiple scans (each comprising six sweeps) at different time delays with respect to the start of the x-ray contrast agent injection. The projections are then reconstructed into angular blocks and interpolated at desired time points. The authors have previously demonstrated its feasibility in vivo using an animal model. In this paper, the authors describe an in vitro technique to evaluate the accuracy of their method for measuring the relevant temporal signals. Methods: The authors' evaluation method is based on the concept that any temporal signal can be represented by a Fourier series of weighted sinusoids. A sinusoidal phantom was developed by varying the concentration of iodine as successive steps of a sine wave, each step corresponding to a different dilution of iodine contrast solution contained in partitions along a cylinder. By translating the phantom along its axis at different velocities, sinusoidal signals at different frequencies were generated. Using their image acquisition and reconstruction algorithm, these sinusoidal signals were imaged with a C-arm system and the 3D volumes were reconstructed. The average value in a slice was plotted as a function of time. The phantom was also imaged using a clinical CT system with 0.5 s rotation. C-arm CBCT results using 6, 3, 2, and 1 scan sequences were compared to those obtained using CT. Data were compared for linear velocities of the phantom ranging from 0.6 to 1 cm/s. This covers temporal frequencies up to 0.16 Hz, a range containing 99% of the spectral energy of temporal signals in cerebral perfusion imaging. 
Results: The errors in measurement of temporal frequencies are mostly below 2% for all multiscan sequences. For single scan sequences, the errors increase sharply beyond 0.10 Hz. The amplitude errors increase with frequency and with decrease in the number of scans used. Conclusions: Our multiscan perfusion CT approach allows low errors in signal frequency measurement. Increasing the number of scans reduces the amplitude errors. A two-scan sequence appears to offer the best compromise between accuracy and the associated total x-ray and iodine dose. PMID:23127059
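    A toy version of the frequency-accuracy check can be written in a few lines; the sampling interval and duration below are assumptions chosen for illustration, not the phantom protocol.

```python
import numpy as np

f_true = 0.10                        # Hz, within the 0.16 Hz band cited above
dt = 0.5                             # s per effective time point (assumed)
t = np.arange(0, 120, dt)            # 2 minutes of sampled signal
signal = np.sin(2 * np.pi * f_true * t)

# Estimate the temporal frequency from the FFT peak (skip the DC bin)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), dt)
f_est = freqs[np.argmax(spectrum[1:]) + 1]

print(abs(f_est - f_true) / f_true)  # relative frequency error, well under 2%
```

    Undersampling in time (fewer scans, longer effective dt) pushes the Nyquist limit toward the signal band, which is why the single-scan errors grow sharply beyond 0.10 Hz.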

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
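    The variance bookkeeping described above can be sketched with an error budget in which independent sources combine in quadrature; the percentages below are hypothetical, chosen only so that the co-location and temporal terms dominate, as the authors report.

```python
import math

# Illustrative relative errors for each independent source (not the paper's numbers)
errors = {
    "measurement": 0.05,
    "allometric":  0.10,
    "co_location": 0.15,
    "temporal":    0.12,
}

# Independent sources add in quadrature
total = math.sqrt(sum(e**2 for e in errors.values()))
variance_fraction = {k: e**2 / total**2 for k, e in errors.items()}

print(f"total relative error: {total:.1%}")
for name, frac in variance_fraction.items():
    print(f"  {name}: {frac:.0%} of total variance")
```

    With these inputs the co-location and temporal terms together account for about three quarters of the variance, so halving either would shrink the total far more than eliminating the measurement term entirely.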

  9. Transfer effects of manipulating temporal constraints on learning a two-choice reaction time task with low stimulus-response compatibility.

    PubMed

    Chen, David D; Pei, Laura; Chan, John S Y; Yan, Jin H

    2012-10-01

    Recent research using deliberate amplification of spatial errors to enhance motor learning raises the question of whether amplifying temporal errors may also facilitate learning. We investigated transfer effects caused by manipulating temporal constraints on learning a two-choice reaction time (CRT) task with varying degrees of stimulus-response compatibility. Thirty-four participants were randomly assigned to one of three groups and completed 120 trials during acquisition. For every fourth trial, one group was instructed to decrease CRT by 50 msec. relative to the previous trial, and a second group was instructed to increase CRT by 50 msec. The third group (the control) was told not to change their responses. After a 5-min. break, participants completed a 40-trial no-feedback transfer test. A 40-trial delayed transfer test was administered 24 hours later. During acquisition, the Decreased Reaction Time group responded faster, but made more errors, than the other two groups. In the immediate transfer test, the Decreased Reaction Time group had faster reaction times than the other two groups, while in the delayed transfer test both the Decreased and Increased Reaction Time groups reacted significantly faster than the control. Analyses of error scores in the transfer tests revealed no significant group differences. Results are discussed with regard to the notion of practice variability and goal-setting benefits.

  10. Exploring the Retrieval Dynamics of Delayed and Final Free Recall: Further Evidence for Temporal-Contextual Search

    ERIC Educational Resources Information Center

    Unsworth, Nash

    2008-01-01

    Retrieval dynamics in free recall were explored based on a two-stage search model that relies on temporal-contextual cues. Participants were tested on both delayed and final free recall, and correct recalls, errors, and latency measures were examined. In delayed free recall, participants began recall with the first word presented and tended to…

  11. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
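    The block-level correlation idea can be illustrated directly; the frames below are synthetic random arrays with one deliberately corrupted macroblock standing in for a motion-compensation error (all sizes and thresholds are illustrative, not the paper's settings).

```python
import numpy as np

rng = np.random.default_rng(1)

def block_correlation(frame_a, frame_b, block=16):
    """Pearson correlation of co-located (block x block) regions in two frames."""
    h, w = frame_a.shape
    corrs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = frame_a[y:y + block, x:x + block].ravel()
            b = frame_b[y:y + block, x:x + block].ravel()
            corrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(corrs)

# Second frame is the first plus mild noise, except one corrupted macroblock
f1 = rng.random((64, 64))
f2 = f1 + 0.05 * rng.random((64, 64))
f2[16:32, 16:32] = rng.random((16, 16))   # temporally incoherent block

corrs = block_correlation(f1, f2)
print(corrs.min(), np.median(corrs))      # corrupted block stands out clearly
```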

  12. Unreliability and error in the military's "gold standard" measure of sexual harassment by education and gender.

    PubMed

    Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S

    2011-01-01

    The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability was significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.

  13. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
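    A minimal illustration of "local truncation error as a sum of time-correlated random variables": here the fluctuations are modeled as an AR(1) process. The choice of process and all parameters are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def correlated_truncation_error(n_steps, sigma, rho, seed=0):
    """Time-correlated fluctuations modeled as an AR(1) process:
    e_t = rho * e_{t-1} + sqrt(1 - rho**2) * sigma * w_t,  w_t ~ N(0, 1).
    Returns the per-step errors and their accumulated sum."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n_steps)
    for t in range(1, n_steps):
        e[t] = rho * e[t - 1] + np.sqrt(1.0 - rho**2) * sigma * rng.normal()
    return e, np.cumsum(e)
```

    The correlation parameter rho controls the memory: rho = 0 recovers the memory-less case, while rho close to 1 makes errors accumulate coherently over many steps.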

  14. Spatial-temporal-covariance-based modeling, analysis, and simulation of aero-optics wavefront aberrations.

    PubMed

    Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J

    2014-07-01

    We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^(5/3) power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
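    The d^(5/3) fitting-error law mentioned above amounts to a one-line formula. The coefficient below is an assumed, influence-function-dependent constant (roughly 0.2-0.4 for common deformable mirrors in the AO literature), not a value taken from this paper.

```python
import numpy as np

def dm_fitting_error_var(d, r0, a_fit=0.28):
    """Residual wavefront variance (rad^2) after deformable-mirror fitting
    for Kolmogorov turbulence: sigma^2 = a_fit * (d / r0)**(5/3),
    where d is actuator separation and r0 the Fried parameter.
    a_fit is an assumed fitting coefficient."""
    return a_fit * (d / r0) ** (5.0 / 3.0)
```

    For example, halving the actuator spacing reduces the fitting variance by a factor of 2^(5/3), about 3.2.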

  15. Updating of Aversive Memories after Temporal Error Detection Is Differentially Modulated by mTOR across Development

    ERIC Educational Resources Information Center

    Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie

    2017-01-01

    The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats,…

  16. Interactions between Brief Flashed Lines at Threshold.

    DTIC Science & Technology

    1987-12-11

    [Garbled OCR of report front matter; recoverable references:] ... Cass, P. C. (1986) Facilitatory interactions between flashed lines. Perception, 443-460. Smith, P. A. and Cass, P. C. (1987) Aliasing in the

  17. Abandoned Uranium Mine (AUM) Surface Areas, Navajo Nation, 2016, US EPA Region 9

    EPA Pesticide Factsheets

    This GIS dataset contains polygon features that represent all Abandoned Uranium Mines (AUMs) on or within one mile of the Navajo Nation. Attributes include mine names, aliases, Potentially Responsible Parties, reclamation status, EPA mine status, links to AUM reports, and the region in which an AUM is located. This dataset contains 608 features.

  18. 77 FR 44307 - In the Matter of the Review of the Designation of the Islamic Resistance Movement (Hamas and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-27

    ... DEPARTMENT OF STATE [Public Notice 7965] In the Matter of the Review of the Designation of the Islamic Resistance Movement (Hamas and Other Aliases) As a Foreign Terrorist Organization pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative...

  19. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  20. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  1. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  2. 32 CFR Appendix A to Part 270 - Application for Compensation of Vietnamese Commandos

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operative is the basis for applying for payment: (1) Current legal name or legal name at death: (a) Aliases: (b) Former, or other legal names used: (2) Current address or last address prior to death: (3... 1958 through 1975. I declare under penalty of perjury under the laws of the United States of America...

  3. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, several areas of the country have sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is devoted to quantifying the efficacy of these models through different metrics of model performance, but evaluation is currently limited to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains: error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.
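    In heavily simplified form, a non-linear error correction of model output against observations for one region could look like the sketch below. A polynomial stands in for the paper's (unspecified) non-linear model, and all names are illustrative.

```python
import numpy as np

def fit_error_curve(modeled, observed, deg=2):
    """Fit a non-linear (polynomial) error curve e(c) = observed - modeled
    as a function of modeled concentration, for one region/period.
    Returns polynomial coefficients, lowest order first."""
    err = observed - modeled
    return np.polynomial.polynomial.polyfit(modeled, err, deg)

def correct(modeled, coeffs):
    """Apply the fitted error curve to model grid values."""
    return modeled + np.polynomial.polynomial.polyval(modeled, coeffs)
```

    Fitting separate curves per region (and per season) captures the regional and temporal variation of model error that a single linear, homoscedastic correction would miss.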

  4. High-resolution observations of the globular cluster NGC 7099

    NASA Astrophysics Data System (ADS)

    Sams, Bruce Jones, III

    The globular cluster NGC 7099 is a prototypical collapsed-core cluster. Through a series of instrumental, observational, and theoretical investigations, I have resolved its core structure using a ground-based telescope. The core has a radius of 2.15 arcsec when imaged with a V-band spatial resolution of 0.35 arcsec. Initial attempts at speckle imaging produced images of inadequate signal-to-noise ratio and resolution. To explain these results, a new, fully general signal-to-noise model has been developed. It properly accounts for all sources of noise in a speckle observation, including aliasing of high spatial frequencies by inadequate sampling of the image plane. The model, called Full Speckle Noise (FSN), can be used to predict the outcome of any speckle imaging experiment. A new high-resolution imaging technique called ACT (Atmospheric Correlation with a Template) was developed to create sharper astronomical images. ACT compensates for image motion due to atmospheric turbulence. It is similar to the Shift-and-Add algorithm, but uses a priori spatial knowledge about the image to further constrain the shifts. In this instance, the final images of NGC 7099 have resolutions of 0.35 arcsec from data taken in 1 arcsec seeing. The PAPA (Precision Analog Photon Address) camera was used to record data. It is subject to errors when imaging cluster cores in a large field of view; the origin of these errors is explained, and several ways to avoid them are proposed. New software was created for the PAPA camera to properly process flat-field images taken in a large field of view. Absolute photometry measurements of NGC 7099 made with the PAPA camera are accurate to 0.1 magnitude. Luminosity sampling errors dominate surface brightness profiles of the central few arcsec in a collapsed-core cluster; these errors set limits on the ultimate spatial accuracy of surface brightness profiles.

  5. Corrections of stratified tropospheric delays in SAR interferometry: Validation with global atmospheric models

    NASA Astrophysics Data System (ADS)

    Doin, Marie-Pierre; Lasserre, Cécile; Peltzer, Gilles; Cavalié, Olivier; Doubre, Cécile

    2010-05-01

    The main limiting factor on the accuracy of Interferometric SAR (InSAR) measurements comes from phase propagation delays through the troposphere. The delay can be divided into a stratified component, which correlates with the topography and often dominates the tropospheric signal, and a turbulent component. We use Global Atmospheric Models (GAM) to estimate the stratified phase delay and delay-elevation ratio at the epochs of SAR acquisitions, and compare them to observed phase delays derived from SAR interferograms. Three test areas are selected with different geographic and climatic environments and with large SAR archives available: the Lake Mead area, Nevada, USA, covered by 79 ERS1/2 and ENVISAT acquisitions; the Haiyuan Fault area, Gansu, China, by 24 ERS1/2 acquisitions; and the Afar region, Republic of Djibouti, by 91 Radarsat acquisitions. The hydrostatic and wet stratified delays are computed from GAM as a function of the vertical profiles of atmospheric pressure P, temperature T, and water vapor partial pressure e. The hydrostatic delay, which depends on the ratio P/T, varies significantly at low elevation and cannot be neglected. The wet component of the delay depends mostly on the near-surface specific humidity. GAM-predicted delay-elevation ratios are in good agreement with the ratios derived from InSAR data away from deforming zones. Both estimations of the delay-elevation ratio can thus be used to perform a first-order correction of the observed interferometric phase to retrieve a ground motion signal of low amplitude. We also demonstrate that aliasing of daily and seasonal variations in the stratified delay, due to uneven sampling of SAR data, significantly biases InSAR data stacks or time series produced after temporal smoothing. In all three test cases, the InSAR data stacks or smoothed time series present a residual stratified delay of the order of the expected deformation signal. In all cases, correcting the interferograms for the stratified delay removes these biases. We quantify the standard error associated with the correction of the stratified atmospheric delay. It varies from one site to another depending on the prevailing atmospheric conditions, but remains bounded by the standard deviation of the daily fluctuations of the stratified delay around the seasonal average. Finally, we suggest that the phase delay correction can potentially be improved by introducing a non-linear dependence on the elevation derived from GAM.

  6. Corrections of stratified tropospheric delays in SAR interferometry: Validation with global atmospheric models

    NASA Astrophysics Data System (ADS)

    Doin, M.-P.; Lasserre, C.; Peltzer, G.; Cavalié, O.; Doubre, C.

    2009-09-01

    The main limiting factor on the accuracy of Interferometric SAR (InSAR) measurements comes from phase propagation delays through the troposphere. The delay can be divided into a stratified component, which correlates with the topography and often dominates the tropospheric signal, and a turbulent component. We use Global Atmospheric Models (GAM) to estimate the stratified phase delay and delay-elevation ratio at the epochs of SAR acquisitions, and compare them to observed phase delays derived from SAR interferograms. Three test areas are selected with different geographic and climatic environments and with large SAR archives available: the Lake Mead area, Nevada, USA, covered by 79 ERS1/2 and ENVISAT acquisitions; the Haiyuan Fault area, Gansu, China, by 24 ERS1/2 acquisitions; and the Afar region, Republic of Djibouti, by 91 Radarsat acquisitions. The hydrostatic and wet stratified delays are computed from GAM as a function of the vertical profiles of atmospheric pressure P, temperature T, and water vapor partial pressure e. The hydrostatic delay, which depends on the ratio P/T, varies significantly at low elevation and cannot be neglected. The wet component of the delay depends mostly on the near-surface specific humidity. GAM-predicted delay-elevation ratios are in good agreement with the ratios derived from InSAR data away from deforming zones. Both estimations of the delay-elevation ratio can thus be used to perform a first-order correction of the observed interferometric phase to retrieve a ground motion signal of low amplitude. We also demonstrate that aliasing of daily and seasonal variations in the stratified delay, due to uneven sampling of SAR data, significantly biases InSAR data stacks or time series produced after temporal smoothing. In all three test cases, the InSAR data stacks or smoothed time series present a residual stratified delay of the order of the expected deformation signal. In all cases, correcting the interferograms for the stratified delay removes these biases. We quantify the standard error associated with the correction of the stratified atmospheric delay. It varies from one site to another depending on the prevailing atmospheric conditions, but remains bounded by the standard deviation of the daily fluctuations of the stratified delay around the seasonal average. Finally, we suggest that the phase delay correction can potentially be improved by introducing a non-linear dependence on the elevation derived from GAM.
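    The first-order correction described in these two records (fitting and removing a linear phase-elevation ramp over non-deforming pixels) can be sketched as follows; the function name and the assumption that all supplied pixels are non-deforming are illustrative.

```python
import numpy as np

def remove_stratified_delay(phase, elevation):
    """First-order stratified-delay correction: fit phase = k*z + c over
    (assumed) non-deforming pixels and subtract the fitted ramp.
    Returns the corrected phase and the delay-elevation ratio k."""
    k, c = np.polyfit(elevation, phase, 1)
    return phase - (k * elevation + c), k
```

    In practice the fit would be restricted to pixels away from deforming zones, or k would be taken directly from the GAM prediction, and the ramp subtracted from each interferogram before stacking.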

  7. A Multi-Satellite GRACE-like Mission Using Small Satellites

    NASA Astrophysics Data System (ADS)

    Stephens, M.; Bender, P. L.; Nerem, R.; Pierce, R.; Wiese, D. N.

    2010-12-01

    Measurement of global water variation provides information critical to climate change and water resource monitoring. The Gravity Recovery and Climate Experiment II (GRACE II) was chosen as a Tier III mission by National Research Council's decadal survey because of its unique ability to measure the global mass distributions and variations in the mass distribution caused primarily by water variation. We discuss a multi-satellite approach to a GRACE-like mission. Enhanced spatial resolution of mass variations over those provided by the current GRACE mission can be achieved by improving the ranging accuracy; an interferometric ranging concept that improves the ranging accuracy has been demonstrated[1]. However, recent calculations show that to obtain the full science improvement using interferometric ranging, temporal aliasing errors due to modeling and to undersampling of geophysical signals must be mitigated[2]. One approach is to improve the data analysis techniques and validation processes. Another approach is to fly two or more pairs of satellites, thereby sampling the Earth's gravitational field at shorter time intervals[3]. A multiple-pair mission is often dismissed as too expensive, but the mission costs of a multiple-pair GRACE-like mission could be greatly reduced by developing compact ranging systems so that the mass, power, and volume usage is consistent with small spacecraft buses. Such size reduction drastically reduces the launch costs by allowing the spacecraft to be launched as auxiliary payloads. We will discuss the technological challenges that are associated with a GRACE-like mission that uses smallsats to reduce costs of more than one pair of satellites, as well as the scientific benefits of the two or more satellite pairs. The technological challenges include reducing the size of the payload and developing a low-drag, low-pointing jitter spacecraft. [1]Pierce, R., J. Leitch, M. Stephens, P. Bender, and R. 
Nerem, “Intersatellite range monitoring using optical interferometry,” Appl. Opt. 47, 5007 (2008). [2] P. Visser and E. Pavlis, in “Report from the Workshop on The Future of Satellite Gravimetry,” edited by R. Koop and R. Rummel (ESTEC, Noordwijk, The Netherlands, 12-13 April 2007), p. 11. [3] Bender, P. L., D. N. Wiese, and R. S. Nerem, “A possible dual-GRACE mission with 90 degree and 63 degree inclination orbits,” Proceedings of the 3rd International Symposium on Formation Flying, Missions and Technologies, ESA Communication Production Office, ESA-SP-654, 2008.

  8. Source memory errors in schizophrenia, hallucinations and negative symptoms: a synthesis of research findings.

    PubMed

    Brébion, G; Ohlsen, R I; Bressan, R A; David, A S

    2012-12-01

    Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.

  9. The frontal-anatomic specificity of design fluency repetitions and their diagnostic relevance for behavioral variant frontotemporal dementia.

    PubMed

    Possin, Katherine L; Chester, Serana K; Laluz, Victor; Bostrom, Alan; Rosen, Howard J; Miller, Bruce L; Kramer, Joel H

    2012-09-01

    On tests of design fluency, an examinee draws as many different designs as possible in a specified time limit while avoiding repetition. The neuroanatomical substrates and diagnostic group differences of design fluency repetition errors and total correct scores were examined in 110 individuals diagnosed with dementia, 53 with mild cognitive impairment (MCI), and 37 neurologically healthy controls. The errors correlated significantly with volumes in the right and left orbitofrontal cortex (OFC), the right and left superior frontal gyrus, the right inferior frontal gyrus, and the right striatum, but did not correlate with volumes in any parietal or temporal lobe regions. Regression analyses indicated that the lateral OFC may be particularly crucial for preventing these errors, even after excluding patients with behavioral variant frontotemporal dementia (bvFTD) from the analysis. Total correct correlated more diffusely with volumes in the right and left frontal and parietal cortex, the right temporal cortex, and the right striatum and thalamus. Patients diagnosed with bvFTD made significantly more repetition errors than patients diagnosed with MCI, Alzheimer's disease, semantic dementia, progressive supranuclear palsy, or corticobasal syndrome. In contrast, total correct design scores did not differentiate the dementia patients. These results highlight the frontal-anatomic specificity of design fluency repetitions. In addition, the results indicate that the propensity to make these errors supports the diagnosis of bvFTD. (JINS, 2012, 18, 1-11).

  10. Phase stabilization of multidimensional amplification architectures for ultrashort pulses

    NASA Astrophysics Data System (ADS)

    Müller, M.; Kienel, M.; Klenke, A.; Eidam, T.; Limpert, J.; Tünnermann, A.

    2015-03-01

    The active phase stabilization of spatially and temporally combined ultrashort pulses is investigated theoretically and experimentally. In particular, for a combining scheme with 2 amplifier channels and 4 divided-pulse replicas, a bistable behavior is observed. The reason is the mutual influence of the optical error signals, which is intrinsic to temporal polarization beam combining. A successful mitigation strategy is proposed and analyzed theoretically and experimentally.

  11. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

    PubMed Central

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (the INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (the FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule’s error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism. PMID:27532262

  12. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    PubMed

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (the INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (the FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
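    The key distinction between the two rules in these records is whether the error signal is used raw (INST) or smoothed before driving weight updates (FILT). A minimal sketch of the smoothing step, using a causal exponential filter (an assumed stand-in for the paper's kernel), is:

```python
import numpy as np

def exp_filter(signal, tau, dt=1.0):
    """Causal exponential low-pass filter applied to a discretized error
    signal; an INST-style rule would use `signal` directly, a FILT-style
    rule would use the filtered version for smoother weight changes."""
    out = np.zeros_like(signal, dtype=float)
    decay = np.exp(-dt / tau)
    acc = 0.0
    for i, s in enumerate(signal):
        acc = acc * decay + s
        out[i] = acc
    return out

# A (hypothetical) weight update would then correlate the filtered error
# with presynaptic activity x_i:  dw_i = eta * sum(err_filtered * x_i).
```

    Filtering spreads the influence of each spike-timing error over a window of width ~tau, which is what yields the smoother convergence reported for the FILT rule.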

  13. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  14. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
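    The abstract above (records 13 and 14) states that the random displacement error is governed by the image noise and the summed intensity gradient over the subset. A simplified sketch of that relationship, using the commonly quoted closed form (assumed here; the paper's full solution also involves the subpixel displacement and interpolation scheme), is:

```python
import numpy as np

def dic_displacement_noise(subset, sigma_noise):
    """Estimate the random error (pixels) of the x-displacement from
    subset-based DIC via the simplified form
    std(u) ~ sqrt(2) * sigma / sqrt(sum(f_x**2)),
    where f_x is the intensity gradient over the subset."""
    fx = np.gradient(subset.astype(float), axis=1)
    ssg = np.sum(fx ** 2)
    return np.sqrt(2.0) * sigma_noise / np.sqrt(ssg)
```

    The formula makes the abstract's claim concrete: higher image noise raises the error, while stronger speckle contrast (larger summed squared gradient) lowers it.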

  15. Acquiring Research-grade ALSM Data in the Commercial Marketplace

    NASA Astrophysics Data System (ADS)

    Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.

    2003-12-01

    The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include, at a minimum, X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but we have not quantified this relationship. The most significant operational limits on the vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights. Once these inconsistencies are resolved, it appears that the internal errors constitute the bulk of the survey error. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of the resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.
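    The overlap analysis described above reduces, at its core, to computing the RMSE of elevation differences where two swaths cover the same ground. A minimal sketch (assuming the two swaths have already been gridded to co-located cells, with NaN for cells missing in either swath):

```python
import numpy as np

def overlap_z_rmse(z_swath_a, z_swath_b):
    """Relative vertical error between overlapping lidar swaths: RMSE of
    elevation differences at co-located grid cells, ignoring cells that
    are missing (NaN) in either swath."""
    dz = z_swath_a - z_swath_b
    dz = dz[~np.isnan(dz)]
    return np.sqrt(np.mean(dz ** 2))
```

    Mapping this statistic per tile, rather than reporting one global number, is what reveals the time-varying calibration errors mentioned in the abstract.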

  16. Void Growth and Coalescence Simulations

    DTIC Science & Technology

    2013-08-01

    distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to...estimate errors, thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, X; Li, Z; Zheng, D

    Purpose: In the context of evaluating the dosimetric impact of a variety of uncertainties involved in HDR Tandem-and-Ovoid treatment, to study the correlations between conventional point doses and 3D volumetric doses. Methods: For 5 cervical cancer patients treated with HDR T&O, 150 plans were retrospectively created to study the dosimetric impact of the following uncertainties: (1) inter-fractional applicator displacement between two treatment fractions within a single insertion, by applying the Fraction #1 plan to the Fraction #2 CT; (2) positional dwell error simulated from −5 mm to 5 mm in 1 mm steps; (3) simulated temporal dwell error of 0.05 s, 0.1 s, 0.5 s, and 1 s. The original plans were based on point dose prescription, from which the volume covered by the prescription dose was generated as the pseudo target volume to study the 3D target dose effect. OARs were contoured. The point and volumetric dose errors were calculated by taking the differences between original and simulated plans. The correlations between the point and volumetric dose errors were analyzed. Results: For the most clinically relevant positional dwell uncertainty of 1 mm, temporal uncertainty of 0.05 s, and inter-fractional applicator displacement within the same insertion, the mean target D90 and V100 deviations were within 1%. Among these uncertainties, applicator displacement showed the largest potential impact on target coverage (2.6% on D90) as well as on OAR dose (2.5% and 3.4% on bladder D2cc and rectum D2cc). The Spearman correlation analysis shows a correlation coefficient of 0.43 with a p-value of 0.11 between target D90 coverage and H point dose. Conclusion: With the most clinically relevant positional and temporal dwell uncertainties and patient inter-fractional applicator displacement within the same insertion, the dose error is within the clinically acceptable range. 
The lack of correlation between H point and 3D volumetric dose errors is a motivator for the use of 3D treatment planning in cervical HDR brachytherapy.
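
The Spearman analysis reported above is simply the Pearson correlation of the ranked values. A self-contained sketch, implemented with plain NumPy so no statistics package is assumed (the data here are synthetic, not the study's dose errors; tied values are not rank-averaged in this sketch):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Note: plain argsort ranking; ties are not assigned average ranks here.
    """
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    rx, ry = rank(np.asarray(x)), rank(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Any strictly monotonic relationship yields rho = 1, however nonlinear.
rho_mono = spearman_rho([1.0, 2.0, 3.0, 4.0], [1.0, 8.0, 27.0, 64.0])
rho_anti = spearman_rho([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0])
```

A coefficient of 0.43 with p = 0.11, as in the abstract, indicates that ranking patients by H point dose error does not reliably rank them by D90 error.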

  18. Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data

    PubMed Central

    George, Brandon; Aban, Inmaculada

    2014-01-01

    Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient’s body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure, as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or produce an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure selection procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
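
A separable spatiotemporal covariance factors into a Kronecker product of a temporal and a spatial correlation matrix, which is what makes these models tractable. A minimal sketch with an exponential spatial structure and an AR(1) temporal structure (the locations, range, and autocorrelation values are illustrative, not those of the paper's simulation):

```python
import numpy as np

def exponential_corr(dists, phi):
    """Spatial correlation: exponential decay with distance, range parameter phi."""
    return np.exp(-dists / phi)

def ar1_corr(n_times, rho):
    """Temporal correlation: AR(1), rho**|lag| between repeated measures."""
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    return rho ** lags

coords = np.array([0.0, 1.0, 2.5])                 # 3 spatial locations on a line
dists = np.abs(np.subtract.outer(coords, coords))
sigma_s = exponential_corr(dists, phi=2.0)
sigma_t = ar1_corr(4, rho=0.6)                     # 4 repeated imaging visits

# Separable covariance: Kronecker product (temporal x spatial), 12x12 overall.
sigma = np.kron(sigma_t, sigma_s)
```

Separability means the correlation between (location i, time s) and (location j, time t) is the product of a purely spatial and a purely temporal factor, e.g. the entry pairing the same location at adjacent times is exactly rho = 0.6.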

  19. Decomposition of Sources of Errors in Seasonal Streamflow Forecasting over the U.S. Sunbelt

    NASA Technical Reports Server (NTRS)

    Mazrooei, Amirhossein; Sinah, Tusshar; Sankarasubramanian, A.; Kumar, Sujay V.; Peters-Lidard, Christa D.

    2015-01-01

    Seasonal streamflow forecasts, contingent on climate information, can be utilized to ensure water supply for multiple uses including municipal demands, hydroelectric power generation, and planning of agricultural operations. However, uncertainties in the streamflow forecasts pose significant challenges in their utilization in real-time operations. In this study, we systematically decompose various sources of errors in developing seasonal streamflow forecasts from two Land Surface Models (LSMs) (Noah3.2 and CLM2), which are forced with downscaled and disaggregated climate forecasts. In particular, the study quantifies the relative contributions of the sources of errors from LSMs, climate forecasts, and downscaling/disaggregation techniques in developing seasonal streamflow forecasts. For this purpose, three-month-ahead seasonal precipitation forecasts from the ECHAM4.5 general circulation model (GCM) were statistically downscaled from 2.8deg to 1/8deg spatial resolution using principal component regression (PCR) and then temporally disaggregated from monthly to daily time step using a kernel-nearest-neighbor (K-NN) approach. For other climatic forcings, excluding precipitation, we considered the North American Land Data Assimilation System version 2 (NLDAS-2) hourly climatology over the years 1979 to 2010. The selected LSMs were then forced with precipitation forecasts and NLDAS-2 hourly climatology to develop retrospective seasonal streamflow forecasts over a period of 20 years (1991-2010). Finally, the performance of the LSMs in forecasting streamflow under different schemes was analyzed to quantify the relative contribution of the various sources of errors. Our results indicate that the most dominant source of error during the winter and fall seasons is the ECHAM4.5 precipitation forecasts, while the temporal disaggregation scheme contributes the largest errors during the summer season.

  20. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  1. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the preferred option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames, along with the error video frames, are processed with both error concealment algorithms. According to simulation results, Frequency Selective Extrapolation shows better quality measures than the Block Matching algorithm, with 48% improved PSNR and 94% increased SSIM.
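
The PSNR metric used above has a closed form worth stating: it is the peak signal power over the mean squared error, in decibels. A minimal NumPy sketch for 8-bit frames (SSIM is omitted here; it is a windowed structural comparison typically taken from an image-processing library):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit video frames."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 16 gray levels gives MSE = 256, i.e. ~24 dB.
p = psnr(np.zeros((8, 8)), np.full((8, 8), 16.0))
```

Higher PSNR after concealment indicates the reconstructed frame is numerically closer to the original, which is how the 48% improvement in the abstract is measured.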

  2. Wide-Angle Multistatic Synthetic Aperture Radar: Focused Image Formation and Aliasing Artifact Mitigation

    DTIC Science & Technology

    2005-07-01

    Progress in Applied Computational Electromagnetics. ACES, Syracuse, NY, 2004. Mahafza, Bassem R., Radar Systems Analysis and Design Using MATLAB... [figure-list excerpts: RCS chamber coordinate system; AFIT's RCS chamber; frequency domain schematic of RCS data collection; spherical coordinate system for RCS data calibration]

  3. 75 FR 74127 - In the Matter of the Review of the Designation of Islamic Movement of Uzbekistan (IMU and Other...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ... DEPARTMENT OF STATE [Public Notice: 7250] In the Matter of the Review of the Designation of Islamic Movement of Uzbekistan (IMU and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative...

  4. An Imperfect Dopaminergic Error Signal Can Drive Temporal-Difference Learning

    PubMed Central

    Potjans, Wiebke; Diesmann, Markus; Morrison, Abigail

    2011-01-01

    An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence that supports the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of the midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results with respect to dopamine, pre- and post-synaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired with respect to the traditional algorithm on a task with both positive and negative rewards and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when driven by negative rewards. PMID:21589888
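
The classical discrete-time TD algorithm that the spiking model is benchmarked against can be sketched in tabular form. Here is a deliberately tiny example on a deterministic two-state chain with a single positive terminal reward (the task, learning rate, and discount are illustrative, not the paper's simulation):

```python
import numpy as np

# Tabular TD(0) value learning on the chain s0 -> s1 -> terminal (+1 reward).
n_states, alpha, gamma = 2, 0.1, 1.0
V = np.zeros(n_states)

for episode in range(500):
    # Transition s0 -> s1, reward 0: TD error is the prediction error
    # between the bootstrapped target and the current estimate.
    delta = 0.0 + gamma * V[1] - V[0]
    V[0] += alpha * delta
    # Transition s1 -> terminal, reward +1 (terminal value is 0).
    delta = 1.0 + gamma * 0.0 - V[1]
    V[1] += alpha * delta

# Both state values converge toward the true return of 1.
```

The abstract's point can be read directly off `delta`: if negative prediction errors cannot be signaled symmetrically (as with a dopaminergic signal whose baseline firing limits dips), updates driven by negative `delta` are attenuated, and learning from punishment degrades.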

  5. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. 
Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the ECMWF model variables are recommended.
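
The advantage of sampling intervals that are nonintegral divisors of 24 hours can be illustrated directly: sampling a semidiurnal (12 h) harmonic every 12 h always hits the same phase, so the harmonic aliases into a bias of the time mean, whereas a 7 h interval cycles through all twelve distinct phases and the bias cancels. A toy numerical check (the signal is synthetic, not an ECMWF field):

```python
import numpy as np

def time_mean(interval_h, n_samples, phase=1.0):
    """Mean of a semidiurnal (12 h period) harmonic sampled every interval_h hours."""
    t = interval_h * np.arange(n_samples)
    return float(np.mean(np.sin(2.0 * np.pi * t / 12.0 + phase)))

# The true time mean of the harmonic is 0.
biased = time_mean(12, 840)   # 12 h sampling: constant phase -> bias of sin(1) ~ 0.84
fair = time_mean(7, 840)      # 7 h sampling: phases cycle through all 12 residues
```

Since gcd(7, 12) = 1, the 7 h samples visit every phase of the 12 h harmonic equally often and the mean collapses to zero; any interval dividing 24 evenly pins the phase and leaves the full aliased bias in the mean.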

  6. Spatial acoustic signal processing for immersive communication

    NASA Astrophysics Data System (ADS)

    Atkins, Joshua

    Computing is rapidly becoming ubiquitous as users expect devices that can augment and interact naturally with the world around them. In these systems it is necessary to have an acoustic front-end that is able to capture and reproduce natural human communication. Whether the end point is a speech recognizer or another human listener, the reduction of noise, reverberation, and acoustic echoes are all necessary and complex challenges. The focus of this dissertation is to provide a general method for approaching these problems using spherical microphone and loudspeaker arrays. In this work, a theory of capturing and reproducing three-dimensional acoustic fields is introduced from a signal processing perspective. In particular, the decomposition of the spatial part of the acoustic field into an orthogonal basis of spherical harmonics provides not only a general framework for analysis, but also many processing advantages. The spatial sampling error limits the upper frequency range with which a sound field can be accurately captured or reproduced. In broadband arrays, the cost and complexity of using multiple transducers is an issue. This work provides a flexible optimization method for determining the location of array elements to minimize the spatial aliasing error. The low frequency array processing ability is also limited by the SNR, mismatch, and placement error of transducers. To address this, a robust processing method is introduced and used to design a reproduction system for rendering over arbitrary loudspeaker arrays or binaurally over headphones. In addition to the beamforming problem, the multichannel acoustic echo cancellation (MCAEC) issue is also addressed. A MCAEC must adaptively estimate and track the constantly changing loudspeaker-room-microphone response to remove the sound field presented over the loudspeakers from that captured by the microphones. 
In the multichannel case, the system is overdetermined and many adaptive schemes fail to converge to the true impulse response. This forces the need to track both the near and far end room responses. A transform domain method that mitigates this problem is derived and implemented. Results with a real system using a 16-channel loudspeaker array and 32-channel microphone array are presented.
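
The spatial-aliasing limit mentioned above follows, in its simplest form, the half-wavelength rule familiar from uniform arrays: neighboring transducers must be no farther apart than half the shortest wavelength of interest. A back-of-the-envelope sketch (the spacing is an illustrative value, not the dissertation's array geometry, and spherical arrays have a more refined limit in terms of spherical-harmonic order):

```python
# Upper frequency limit before spatial aliasing for a given element spacing.
def max_unaliased_frequency(spacing_m, c=343.0):
    """Spatial Nyquist: spacing d must satisfy d <= lambda/2, i.e. f <= c/(2d)."""
    return c / (2.0 * spacing_m)

f_max = max_unaliased_frequency(0.04)   # 4 cm spacing -> ~4.3 kHz at c = 343 m/s
```

This trade-off between element count, spacing, and usable bandwidth is what motivates the element-placement optimization described in the abstract.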

  7. On removing interpolation and resampling artifacts in rigid image registration.

    PubMed

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  8. On Removing Interpolation and Resampling Artifacts in Rigid Image Registration

    PubMed Central

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R.; Fischl, Bruce

    2013-01-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration. PMID:23076044

  9. Power cepstrum technique with application to model helicopter acoustic data

    NASA Technical Reports Server (NTRS)

    Martin, R. M.; Burley, C. L.

    1986-01-01

    The application of the power cepstrum to measured helicopter-rotor acoustic data is investigated. A previously applied correction to the reconstructed spectrum is shown to be incorrect. For an exact echoed signal, the amplitude of the cepstrum echo spike at the delay time is linearly related to the echo's relative amplitude in the time domain. If the measured spectrum is not entirely from the source signal, the cepstrum will not yield the desired echo characteristics, and cepstral aliasing may occur because of the effective sample rate in the frequency domain. The spectral analysis bandwidth must be less than one-half the echo ripple frequency or cepstral aliasing can occur. The power cepstrum editing technique is a useful tool for removing some of the contamination due to acoustic reflections from measured rotor acoustic spectra. The cepstrum editing yields an improved estimate of the free-field spectrum, but the correction process is limited by the lack of accurate knowledge of the echo transfer function. An alternate procedure, which does not require cepstral editing, is proposed which allows the complete correction of a contaminated spectrum through use of both the transfer function and delay time of the echo process.
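
The echo spike at the delay time can be reproduced in a few lines: the power cepstrum is the inverse FFT of the log power spectrum, and an echo of relative amplitude a at delay d produces a spike on the order of a at quefrency d (with weaker harmonics at 2d, 3d, ...). A synthetic check (the pulse shape and echo parameters are illustrative, not rotor data):

```python
import numpy as np

n, delay, amp = 2048, 100, 0.5
t = np.arange(n)

# Direct signal: a narrow Gaussian pulse; the echoed signal adds a
# scaled, delayed copy, so X(w) = S(w) * (1 + amp * exp(-i*w*delay)).
pulse = np.exp(-0.5 * ((t - 300) / 2.0) ** 2)
signal = pulse + amp * np.roll(pulse, delay)

# Power cepstrum: inverse FFT of the log power spectrum.
log_power = np.log(np.abs(np.fft.rfft(signal)) ** 2)
cepstrum = np.fft.irfft(log_power)

# The dominant spike away from the origin sits at the echo delay.
echo_quefrency = 20 + int(np.argmax(cepstrum[20 : n // 2]))
```

Cepstral editing, as used in the abstract, amounts to zeroing the spike (and its harmonics) in `cepstrum` and transforming back to obtain a cleaner free-field spectrum estimate.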

  10. Undersampled digital holographic interferometry

    NASA Astrophysics Data System (ADS)

    Halaq, H.; Demoli, N.; Sović, I.; Šariri, K.; Torzynski, M.; Vukičević, D.

    2008-04-01

    In digital holography, primary holographic fringes are recorded using a matrix CCD sensor. Because of the low spatial resolution of currently available CCD arrays, the angle between the reference and object beams must be limited to a few degrees. Namely, due to the digitization involved, Shannon's criterion imposes that the sampling frequency be at least twice the highest signal frequency. This means that, in the case of the recording of an interference fringe pattern by a CCD sensor, the inter-fringe distance must be larger than twice the pixel period. This in turn limits the angle between the object and the reference beams. If this angle, in a practical holographic interferometry measuring setup, cannot be limited to the required value, aliasing will occur in the reconstructed image. In this work, we demonstrate that the low spatial frequency metrology data can nevertheless be efficiently extracted by careful choice of twofold, and even threefold, undersampling of the object field. By combining time-averaged recording with the subtraction digital holography method, we present results for a loudspeaker membrane interferometric study obtained under strong aliasing conditions. High-contrast fringes, reflecting the vibration modes of the membrane, are obtained.
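
The "few degrees" limit quoted above follows directly from the fringe-spacing condition: the inter-fringe distance lambda / (2 sin(theta/2)) must exceed twice the pixel pitch. A quick sketch (the wavelength and pixel pitch are illustrative values for a He-Ne laser and a typical CCD, not necessarily those of this experiment):

```python
import math

def max_beam_angle_deg(wavelength_m, pixel_pitch_m):
    """Largest reference/object beam angle satisfying the Nyquist fringe condition.

    Fringe spacing lambda / (2 sin(theta/2)) must be >= 2 * pixel pitch,
    so sin(theta/2) <= lambda / (4 * pitch).
    """
    return math.degrees(2.0 * math.asin(wavelength_m / (4.0 * pixel_pitch_m)))

theta = max_beam_angle_deg(633e-9, 6.45e-6)   # ~2.8 degrees for these values
```

Twofold or threefold undersampling, as exploited in the paper, deliberately violates this bound and relies on the aliased fringes still carrying the low-spatial-frequency metrology information.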

  11. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm, and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression.
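
At the core of such ℓ1-regularized wavelet reconstruction is soft thresholding: each iteration takes a data-consistency step and then shrinks the wavelet coefficients toward zero, discarding small coefficients (mostly noise and incoherent aliasing) while keeping large ones (mostly signal, including edges). A stripped-down sketch of the thresholding step alone; the paper's adaptive threshold estimation and edge-correlation prior are omitted:

```python
import numpy as np

def soft_threshold(coeffs, threshold):
    """Proximal operator of the l1 norm: shrink coefficients toward zero.

    Coefficients smaller than the threshold are zeroed; larger ones
    are reduced in magnitude by the threshold.
    """
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

w = np.array([-3.0, -0.2, 0.05, 0.8, 2.5])
shrunk = soft_threshold(w, 0.5)
```

The choice of `threshold` is exactly the regularization parameter the abstract proposes to estimate automatically from the noise intensity instead of hand-tuning.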

  12. Multi-temporal AirSWOT elevations on the Willamette river: error characterization and algorithm testing

    NASA Astrophysics Data System (ADS)

    Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.

    2017-12-01

    We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in Western Oregon, which included six days of AirSWOT flights over a 75 km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacy of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially averaged time series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested into a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90 m).

  13. Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection

    NASA Astrophysics Data System (ADS)

    Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.

    2014-02-01

    Disorders associated with repeated trauma account for about 60% of all occupational illnesses, with Carpal Tunnel Syndrome (CTS) being the most consulted today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to prevalent methods for diagnosis of CTS (i.e., nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken in healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for spatial variability of temperature.

  14. Knowledge-rich temporal relation identification and classification in clinical notes

    PubMed Central

    D’Souza, Jennifer; Ng, Vincent

    2014-01-01

    Motivation: We examine the task of temporal relation classification for the clinical domain. Our approach to this task departs from existing ones in that it is (i) ‘knowledge-rich’, employing sophisticated knowledge derived from discourse relations as well as both domain-independent and domain-dependent semantic relations, and (ii) ‘hybrid’, combining the strengths of rule-based and learning-based approaches. Evaluation results on the i2b2 Clinical Temporal Relations Challenge corpus show that our approach yields a 17–24% and 8–14% relative reduction in error over a state-of-the-art learning-based baseline system when gold-standard and automatically identified temporal relations are used, respectively. Database URL: http://www.hlt.utdallas.edu/~jld082000/temporal-relations/ PMID:25414383

  15. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. 
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.

  16. Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.

    NASA Astrophysics Data System (ADS)

    Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.

    2004-11-01

    The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960-97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
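
The first-difference procedure can be sketched for a single calendar month: compute year-to-year differences at each station, drop the differences spanning suspected breaks, average the surviving differences across stations, and cumulatively sum to recover a large-scale mean series (up to an arbitrary constant). A toy sketch with two stations and one artificial instrument break (the trend, noise level, and break size are illustrative):

```python
import numpy as np

# Annual values for one calendar month at two stations sharing a 0.02 K/yr trend;
# station B has a spurious +0.5 K jump (instrument change) at year index 5.
years = np.arange(1960, 1980)
trend = 0.02 * (years - years[0])
rng = np.random.default_rng(1)
a = trend + rng.normal(0.0, 0.01, len(years))
b = trend + rng.normal(0.0, 0.01, len(years))
b[5:] += 0.5                         # discontinuity to be removed

diff_a = np.diff(a)                  # year-to-year first differences
diff_b = np.diff(b)
diff_b[4] = np.nan                   # drop the difference spanning the break

# Large-scale mean of first differences, then rebuild the mean series.
mean_diff = np.nanmean(np.vstack([diff_a, diff_b]), axis=0)
rebuilt = np.concatenate([[0.0], np.cumsum(mean_diff)])
trend_per_year = np.polyfit(years, rebuilt, 1)[0]   # roughly the shared 0.02 K/yr
```

Because the difference that straddles the break is discarded rather than adjusted, the 0.5 K jump never enters the rebuilt series; the cost, as the abstract notes, is a random-walk error that grows with the number of gaps and shrinks with the number of stations.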


  17. Event-related potentials reflect impaired temporal interval learning following haloperidol administration.

    PubMed

    Forster, Sarah E; Zirnheld, Patrick; Shekhar, Anantha; Steinhauer, Stuart R; O'Donnell, Brian F; Hetrick, William P

    2017-09-01

    Signals carried by the mesencephalic dopamine system and conveyed to anterior cingulate cortex are critically implicated in probabilistic reward learning and performance monitoring. A common evaluative mechanism purportedly subserves both functions, giving rise to homologous medial frontal negativities in feedback- and response-locked event-related brain potentials (the feedback-related negativity (FRN) and the error-related negativity (ERN), respectively), reflecting dopamine-dependent prediction error signals to unexpectedly negative events. Consistent with this model, the dopamine receptor antagonist, haloperidol, attenuates the ERN, but effects on FRN have not yet been evaluated. ERN and FRN were recorded during a temporal interval learning task (TILT) following randomized, double-blind administration of haloperidol (3 mg; n = 18), diphenhydramine (an active control for haloperidol; 25 mg; n = 20), or placebo (n = 21) to healthy controls. Centroparietal positivities, the Pe and feedback-locked P300, were also measured and correlations between ERP measures and behavioral indices of learning, overall accuracy, and post-error compensatory behavior were evaluated. We hypothesized that haloperidol would reduce ERN and FRN, but that ERN would uniquely track automatic, error-related performance adjustments, while FRN would be associated with learning and overall accuracy. As predicted, ERN was reduced by haloperidol and in those exhibiting less adaptive post-error performance; however, these effects were limited to ERNs following fast timing errors. In contrast, the FRN was not affected by drug condition, although increased FRN amplitude was associated with improved accuracy. Significant drug effects on centroparietal positivities were also absent. Our results support a functional and neurobiological dissociation between the ERN and FRN.

  18. Detection of long duration cloud contamination in hyper-temporal NDVI imagery

    NASA Astrophysics Data System (ADS)

    Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.

    2012-04-01

    NDVI time series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long-duration cloud cover can significantly reduce its precision in areas where persistent clouds prevail, so quantifying errors related to cloud contamination is essential for accurate land cover mapping and monitoring. This study aims to detect long-duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb '03-Dec '09) was used after the necessary pre-processing with quality flags and an upper-envelope filter (ASAVGOL). The stacked MODIS-Terra NDVI image (161 layers) was then classified into 10 to 100 clusters using ISODATA, and the 97-cluster image was selected as the best classification on the basis of divergence statistics. To detect long-duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long-duration clouds disturb the normal temporal progression of NDVI and cause anomalies. Of the 97 clusters, 32 were found to be cloud contaminated, and contamination was more prominent in areas of high rainfall. This study can help prevent the propagation of errors caused by long-duration cloud contamination into regional land cover mapping and monitoring.

  19. Restoration of Central Programmed Movement Pattern by Temporal Electrical Stimulation-Assisted Training in Patients with Spinal Cerebellar Atrophy.

    PubMed

    Huang, Ying-Zu; Chang, Yao-Shun; Hsu, Miao-Ju; Wong, Alice M K; Chang, Ya-Ju

    2015-01-01

    Disrupted triphasic electromyography (EMG) patterns of agonist and antagonist muscle pairs during fast goal-directed movements have been found in patients with hypermetria. Since peripheral electrical stimulation (ES) and motor training may modulate motor cortical excitability through plasticity mechanisms, we aimed to investigate whether temporal ES-assisted movement training could influence premovement cortical excitability and alleviate hypermetria in patients with spinal cerebellar ataxia (SCA). The EMG of the agonist extensor carpi radialis muscle and antagonist flexor carpi radialis muscle, premovement motor evoked potentials (MEPs) of the flexor carpi radialis muscle, and the constant and variable errors of movements were assessed before and after 4 weeks of ES-assisted fast goal-directed wrist extension training in the training group and of general health education in the control group. After training, the premovement MEPs of the antagonist muscle were facilitated at 50 ms before the onset of movement. In addition, the EMG onset latency of the antagonist muscle shifted earlier and the constant error decreased significantly. In summary, temporal ES-assisted training alleviated hypermetria by restoring antagonist premovement and temporal triphasic EMG patterns in SCA patients. This technique may be applied to treat hypermetria in cerebellar disorders. (This trial is registered with NCT01983670.).

  20. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results (Part I): Earth's Radiation Budget

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data at frequencies from 1 h to 24 h. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget: sampling coarser than every 4 h results in significant error, and correlations between the true and sampled time series likewise fall off rapidly beyond that interval.
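
    The spread-across-starting-points measure of sampling uncertainty can be illustrated with a toy hourly signal; the signal and numbers here are invented for illustration, not GEOS-5 output:

    ```python
    import numpy as np

    hours = np.arange(24 * 30)                     # 30 days of hourly "truth"
    truth = 240 + 30*np.sin(2*np.pi*hours/24) + 10*np.sin(2*np.pi*hours/120)

    def mean_spread(series, step_hours):
        """Monthly mean for every possible starting offset at a given
        sampling step; the spread across offsets measures the uncertainty
        introduced by that sampling frequency."""
        means = [series[off::step_hours].mean() for off in range(step_hours)]
        return max(means) - min(means)

    spread_1h = mean_spread(truth, 1)   # full resolution: a single offset
    spread_7h = mean_spread(truth, 7)   # coarser sampling: offsets disagree
    ```

    Coarser steps leave unresolved diurnal phase in each mean, so different starting points give different monthly means and the spread grows with the sampling interval.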

  1. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. 
Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
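
    As a minimal illustration of the Lomb-Scargle approach discussed above (not the paper's wavelet method), one can fit a spectral slope to the periodogram of an irregularly sampled white-noise series, for which the true β is 0; the series and frequency grid are synthetic:

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 1000, 500))   # irregular sampling times
    y = rng.standard_normal(500)             # white noise: true beta = 0

    freqs = np.logspace(-2, 0, 60)           # frequencies, cycles per unit time
    power = lombscargle(t, y - y.mean(), 2*np.pi*freqs)  # expects angular freqs

    # beta is the negative slope of log power versus log frequency
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    beta = -slope
    ```

    For colored noise this simple estimator is biased low, which is the underestimation of β that the study reports for the Lomb-Scargle method.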

  2. Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation

    DTIC Science & Technology

    2016-08-12

    error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both...spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross- correlation can be used, whereas distances between spike...with parameters that describe their associated temporal dynamics and relations to the observed output. A description of the model follows, but we

  3. Differential processing of melodic, rhythmic and simple tone deviations in musicians--an MEG study.

    PubMed

    Lappe, Claudia; Lappe, Markus; Pantev, Christo

    2016-01-01

    Rhythm and melody are two basic characteristics of music. Performing musicians have to pay attention to both, and avoid errors in either aspect of their performance. To investigate the neural processes involved in detecting melodic and rhythmic errors from auditory input we tested musicians on both kinds of deviations in a mismatch negativity (MMN) design. We found that MMN responses to a rhythmic deviation occurred at shorter latencies than MMN responses to a melodic deviation. Beamformer source analysis showed that the melodic deviation activated superior temporal, inferior frontal and superior frontal areas whereas the activation pattern of the rhythmic deviation focused more strongly on inferior and superior parietal areas, in addition to superior temporal cortex. Activation in the supplementary motor area occurred for both types of deviations. We also recorded responses to similar pitch and tempo deviations in a simple, non-musical repetitive tone pattern. In this case, there was no latency difference between the MMNs and cortical activation was smaller and mostly limited to auditory cortex. The results suggest that prediction and error detection of musical stimuli in trained musicians involve a broad cortical network and that rhythmic and melodic errors are processed in partially different cortical streams. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data

    NASA Astrophysics Data System (ADS)

    Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher

    2017-05-01

    Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16-day Landsat or 1- to 2-day Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models and (3) non-linear Support Vector Regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that Support Vector Regression with a radial basis function kernel produced the lowest error rates. Sources of errors are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.

  5. The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization

    PubMed Central

    van der Steen, M. C. (Marieke); Keller, Peter E.

    2013-01-01

    A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM. PMID:23772211
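
    The adaptive error-correction component can be sketched as a linear phase-correction rule; this is a minimal adaptation-only sketch, not the full ADAM model, and the parameter names are illustrative:

    ```python
    def simulate_asynchronies(alpha, n_taps=50, start_offset=60.0):
        """Linear phase correction: each tap corrects a fraction alpha of
        the previous tap-to-beat asynchrony (in ms) under a steady tempo."""
        asyn = start_offset
        history = [asyn]
        for _ in range(n_taps):
            asyn -= alpha * asyn            # e[n+1] = (1 - alpha) * e[n]
            history.append(asyn)
        return history
    ```

    For 0 < alpha < 2 the asynchrony decays toward zero; ADAM augments such reactive correction with the predictive temporal-extrapolation (anticipation) processes described above.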

  6. Thermospheric density variations: Observability using precision satellite orbits and effects on orbit propagation

    NASA Astrophysics Data System (ADS)

    Lechtenberg, Travis; McLaughlin, Craig A.; Locke, Travis; Krishna, Dhaval Mysore

    2013-01-01

    This paper examines atmospheric density estimated using precision orbit ephemerides (POE) from the CHAMP and GRACE satellites during short periods of greater atmospheric density variability. The results of the calibration of CHAMP densities derived using POEs with those derived using accelerometers are examined for three different types of density perturbations [traveling atmospheric disturbances (TADs), geomagnetic cusp phenomena, and midnight density maxima] in order to determine the temporal resolution of POE solutions. In addition, the densities are compared to High-Accuracy Satellite Drag Model (HASDM) densities to compare temporal resolution for both types of corrections. The resolution of these models of thermospheric density was found to be inadequate to sufficiently characterize the short-term density variations examined here. Also examined in this paper is the effect of differing density estimation schemes, assessed by propagating an initial orbit state forward in time and examining the induced errors. The propagated POE-derived densities incurred errors of a smaller magnitude than the empirical models, and errors on the same scale as or better than those incurred using the HASDM model.

  7. Geological Carbon Sequestration: A New Approach for Near-Surface Assurance Monitoring

    PubMed Central

    Wielopolski, Lucian

    2011-01-01

    There are two distinct objectives in monitoring geological carbon sequestration (GCS): deep monitoring of the reservoir's integrity and plume movement, and near-surface monitoring (NSM) to ensure public health and the safety of the environment. However, the minimum detection limits of the current instrumentation for NSM are too high for detecting weak signals that are embedded in the background levels of the natural variations, and the data obtained represent point measurements in space and time. A new approach for NSM, based on gamma-ray spectroscopy induced by inelastic neutron scattering (INS), offers novel and unique characteristics: (1) high sensitivity, with a reducible measurement error and detection limit, and (2) temporal and spatial integration of the carbon in soil that results from underground CO2 seepage. Preliminary field results validated this approach, showing carbon suppression of 14% in the first year and 7% in the second year. In addition, the temporal behavior of the error propagation is presented, and it is shown that for a signal at the minimum detection level the error asymptotically approaches 47%. PMID:21556180

  8. Circular carrier squeezing interferometry: Suppressing phase shift error in simultaneous phase-shifting point-diffraction interferometer

    NASA Astrophysics Data System (ADS)

    Zheng, Donghui; Chen, Lei; Li, Jinpeng; Sun, Qinyuan; Zhu, Wenhua; Anderson, James; Zhao, Jian; Schülzgen, Axel

    2018-03-01

    Circular carrier squeezing interferometry (CCSI) is proposed and applied to suppress phase shift error in a simultaneous phase-shifting point-diffraction interferometer (SPSPDI). By introducing a defocus, four phase-shifting point-diffraction interferograms with a circular carrier are acquired and then converted into linear-carrier interferograms by a coordinate transform. The transformed interferograms are rearranged into a spatial-temporal fringe (STF), so that the error lobe is separated from the phase lobe in the Fourier spectrum of the STF; filtering the phase lobe to calculate the extended phase, combined with the corresponding inverse coordinate transform, exactly retrieves the initial phase. Both simulations and experiments validate the ability of CCSI to suppress the ripple error generated by the phase shift error. Compared with carrier squeezing interferometry (CSI), CCSI is effective in situations in which a linear carrier is difficult to introduce, with the added benefit of eliminating retrace error.

  9. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735

  10. Peripheral refraction in normal infant rhesus monkeys

    PubMed Central

    Hung, Li-Fang; Ramamirtham, Ramkumar; Huang, Juan; Qiao-Grider, Ying; Smith, Earl L.

    2008-01-01

    Purpose To characterize peripheral refractions in infant monkeys. Methods Cross-sectional data for horizontal refractions were obtained from 58 normal rhesus monkeys at 3 weeks of age. Longitudinal data were obtained for both the vertical and horizontal meridians from 17 monkeys. Refractive errors were measured by retinoscopy along the pupillary axis and at eccentricities of 15, 30, and 45 degrees. Axial dimensions and corneal power were measured by ultrasonography and keratometry, respectively. Results In infant monkeys, the degree of radial astigmatism increased symmetrically with eccentricity in all meridians. There were, however, initial nasal-temporal and superior-inferior asymmetries in the spherical-equivalent refractive errors. Specifically, the refractions in the temporal and superior fields were similar to the central ametropia, but the refractions in the nasal and inferior fields were more myopic than the central ametropia and the relative nasal field myopia increased with the degree of central hyperopia. With age, the degree of radial astigmatism decreased in all meridians and the refractions became more symmetrical along both the horizontal and vertical meridians; small degrees of relative myopia were evident in all fields. Conclusions As in adult humans, refractive error varied as a function of eccentricity in infant monkeys and the pattern of peripheral refraction varied with the central refractive error. With age, emmetropization occurred for both central and peripheral refractive errors resulting in similar refractions across the central 45 degrees of the visual field, which may reflect the actions of vision-dependent, growth-control mechanisms operating over a wide area of the posterior globe. PMID:18487366

  11. Local error estimates for adaptive simulation of the Reaction-Diffusion Master Equation via operator splitting.

    PubMed

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
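
    The size of the splitting error, and why a step-doubling comparison yields a usable local-error estimate for adapting the timestep, can be seen on a toy linear system; the matrices are invented, with first-order Lie splitting standing in for the sequential reaction/diffusion update:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Toy linear system u' = (A + B)u with non-commuting parts standing in
    # for the reaction (A) and diffusion (B) operators.
    A = np.array([[-1.0, 0.5], [0.0, -2.0]])
    B = np.array([[-0.5, 0.0], [0.3, -0.1]])
    u0 = np.array([1.0, 1.0])

    def lie_step(u, dt):
        """One first-order (Lie) splitting step: diffusion after reaction."""
        return expm(B * dt) @ (expm(A * dt) @ u)

    def local_error(dt):
        """Local splitting error against the exact propagator."""
        return np.linalg.norm(lie_step(u0, dt) - expm((A + B) * dt) @ u0)

    # The local error is O(dt^2), so halving dt cuts it by about a factor
    # of 4; an adaptive solver exploits this scaling to select dt.
    e_full, e_half = local_error(0.1), local_error(0.05)
    ```

    The same idea carries over to the stochastic RDME setting, where the error estimate drives the adaptive timestep selection described in the abstract.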

  12. Reachable Sets for Multiple Asteroid Sample Return Missions

    DTIC Science & Technology

    2005-12-01

    reduce the number of feasible asteroid targets. Reachable sets are defined in a reduced classical orbital element space. The boundary of this...Reachable sets are defined in a reduced classical orbital element space. The boundary of this reduced space is obtained by extremizing a family of...aliasing problems. Other coordinate elements , such as equinoctial elements , can provide a set of singularity-free slowly changing variables, but

  13. 78 FR 18808 - Addition of Certain Persons to the Entity List; Removal of Person From the Entity List Based on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ...) David Khayam, Apt 1811 Manchester Tower, Dubai Marina, Dubai, U.A.E.; and PO Box 111831, Al Daghaya... Rashed, Apt 1811 Manchester Tower, Dubai Marina, Dubai, U.A.E.; and PO Box 111831, Al Daghaya, Dubai, U.A... following two aliases: --Baet Alhoreya Electronics Trading; and --Baet Alhoreya, Apt 1811 Manchester Tower...

  14. 75 FR 40019 - In the Matter of the Review of the Designation of the Communist Party of the Philippines/New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... DEPARTMENT OF STATE [Public Notice: 7086] In the Matter of the Review of the Designation of the Communist Party of the Philippines/New People's Army (aka CPP/NPA and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of...

  15. Artifacts Of Spectral Analysis Of Instrument Readings

    NASA Technical Reports Server (NTRS)

    Wise, James H.

    1995-01-01

    Report presents an experimental and theoretical study of some of the artifacts introduced by processing the outputs of two nominally identical low-frequency instruments: high-sensitivity servo-accelerometers mounted together and operating, in conjunction with signal-conditioning circuits, as seismometers. Processing involved analog-to-digital conversion with anti-aliasing filtering, followed by digital processing including frequency weighting and computation of different measures of power spectral density (PSD).
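
    The kind of artifact that the anti-aliasing filtering prevents is easy to reproduce in a generic sketch (not the report's seismometer chain): subsampling without a filter folds out-of-band energy into the passband.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 400 * t)   # a 400 Hz tone

    # Decimating by 4 without an anti-aliasing filter drops the Nyquist
    # frequency to 125 Hz; the 400 Hz tone folds to |400 - 2*250| = 100 Hz.
    x_naive = x[::4]
    f, pxx = welch(x_naive, fs=fs / 4, nperseg=256)
    alias_freq = f[np.argmax(pxx)]    # spurious spectral peak near 100 Hz
    ```

    The aliased peak is indistinguishable from a genuine 100 Hz signal in the PSD, which is why the filtering must precede the analog-to-digital conversion.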

  16. Effects of spectrometer band pass, sampling, and signal-to-noise ratio on spectral identification using the Tetracorder algorithm

    USGS Publications Warehouse

    Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.

    2003-01-01

    Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. 
The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
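
    The interplay of band pass and sampling interval can be sketched on a synthetic spectrum with one narrow absorption feature; all numbers here are invented for illustration and this is not the Tetracorder simulation:

    ```python
    import numpy as np

    wav = np.linspace(2.0, 2.5, 2001)    # wavelength grid, micrometers
    # narrow Gaussian absorption feature at 2.2 um (sigma = 5 nm)
    spectrum = 1.0 - 0.3 * np.exp(-0.5 * ((wav - 2.2) / 0.005) ** 2)

    def convolve_and_sample(spec, fwhm_um, step_um):
        """Convolve to a Gaussian instrument band pass, then resample."""
        dlam = wav[1] - wav[0]
        sigma = fwhm_um / 2.3548         # FWHM -> Gaussian sigma
        kx = np.arange(-4 * sigma, 4 * sigma + dlam, dlam)
        kernel = np.exp(-0.5 * (kx / sigma) ** 2)
        smooth = np.convolve(spec, kernel / kernel.sum(), mode="same")
        stride = max(1, int(round(step_um / dlam)))
        return smooth[::stride]

    depth_fine = 1 - convolve_and_sample(spectrum, 0.01, 0.001).min()
    depth_coarse = 1 - convolve_and_sample(spectrum, 0.04, 0.02).min()
    ```

    Broadening the band pass and sampling interval shallows and blurs the diagnostic feature, which is how absorption features of different materials begin to resemble one another (the spectral aliasing discussed above).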

  17. Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.

    2016-12-01

    Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes, despite recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator, i.e. no Coriolis force and a small-angle approximation. To connect the analytical Sturm-Liouville results to observations, a first-order forcing consistent with a seasonally aliased Draconic or nodal lunar period (27.21 d aliased into 2.36 y) is applied. This has a plausible rationale, as it ties a latitudinal forcing cycle via a cross-product to the longitudinal terms in the Laplace formulation. The fitted results match the features of QBO both qualitatively and quantitatively; adding second-order terms due to other seasonally aliased lunar periods provides finer detail while remaining consistent with the physical model. Further, running symbolic regression machine learning experiments on the data provided a validation of the approach, as it discovered the same analytical form and fitted values as the first-principles Laplace model. These results conflict with Lindzen's QBO model, in that his original formulation fell short of making the lunar connection, even though Lindzen himself asserted "it is unlikely that lunar periods could be produced by anything other than the lunar tidal potential". By applying a similar analytical approach to ENSO, we find that the tidal equations need to be replaced with a Mathieu-equation formulation consistent with describing a sloshing process in the thermocline depth. Adapting the hydrodynamic math of sloshing, we find that a biennial modulation coupled with angular-momentum forcing variations matching the Chandler wobble gives an impressive match over the measured ENSO range of 1880 to the present. Lunar tidal periods and an additional triaxial nutation of 14-year period provide additional fidelity. 
The caveat is a phase inversion of the biennial mode lasting from 1980 to 1996. The parsimony of these analytical models arises from applying only known cyclic forcing terms to fundamental wave equation formulations. This raises the possibility that both QBO and ENSO can be predicted years in advance, apart from a metastable biennial phase inversion in ENSO.
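
    The seasonal-aliasing figure quoted above is easy to check numerically: sampling the 27.2122-day Draconic cycle at an annual cadence folds it to roughly a 2.37-year period, consistent with the abstract's 2.36 y (a back-of-envelope sketch, not the study's model fit):

    ```python
    DRACONIC_MONTH = 27.2122   # Draconic (nodal) lunar month, days
    TROPICAL_YEAR = 365.2422   # days

    # Only the fractional number of lunar cycles per sampling interval
    # survives annual sampling; fold it to below the annual Nyquist.
    frac = (TROPICAL_YEAR / DRACONIC_MONTH) % 1.0
    frac = min(frac, 1.0 - frac)
    aliased_period_years = 1.0 / frac   # about 2.37 years
    ```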

  18. Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations

    DTIC Science & Technology

    2015-06-01

    using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these... generality and because of interest in multi-speed and high-Reynolds-number, wall-bounded flow regimes, a dual-time framework is adopted in the present work... errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central

  19. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin-density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions

    PubMed Central

    Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin

    2015-01-01

    Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964

  1. "Submesoscale Soup" Vorticity and Tracer Statistics During the Lateral Mixing Experiment

    NASA Astrophysics Data System (ADS)

    Shcherbina, A.; D'Asaro, E. A.; Lee, C. M.; Molemaker, J.; McWilliams, J. C.

    2012-12-01

    A detailed view of upper-ocean velocity, vorticity, and tracer statistics was obtained by a unique synchronized two-vessel survey in the North Atlantic in winter 2012. In winter, the North Atlantic Mode Water region south of the Gulf Stream is filled with an energetic, homogeneous, and well-developed submesoscale turbulence field - the "submesoscale soup". Turbulence in the soup is produced by frontogenesis and the surface-layer instability of mesoscale eddy flows in the vicinity of the Gulf Stream. This region is a convenient representation of the inertial range of the geophysical-turbulence forward cascade, spanning scales of O(1-100 km). During the Lateral Mixing Experiment in February-March 2012, R/Vs Atlantis and Knorr were run on parallel tracks 1 km apart for 500 km in the submesoscale-soup region. Synchronous ADCP sampling provided the first in-situ estimates of full 3-D vorticity and divergence without the usual mix of spatial and temporal aliasing. Tracer distributions were also simultaneously sampled by both vessels using underway and towed instrumentation. The observed vorticity distribution in the mixed layer was markedly asymmetric, with sparse strands of strong anticyclonic vorticity embedded in a weak, predominantly cyclonic background. While the mean vorticity was close to zero, the distribution skewness exceeded 2. These observations confirm theoretical and numerical-model predictions for an active submesoscale turbulence field. Submesoscale vorticity spectra also agreed well with the model prediction.

  2. Spring onset variations and long-term trends from new hemispheric-scale products and remote sensing

    NASA Astrophysics Data System (ADS)

    Dye, D. G.; Li, X.; Ault, T.; Zurita-Milla, R.; Schwartz, M. D.

    2015-12-01

    Spring onset is commonly characterized by plant phenophase changes among a variety of biophysical transitions and has important implications for natural and managed ecosystems. Here, we present a new integrated analysis of variability in gridded Northern Hemisphere spring-onset metrics. We developed a set of hemispheric temperature-based spring indices spanning 1920-2013. Because these were derived solely from meteorological data, they serve as a benchmark for isolating the climate system's role in modulating spring "green-up" estimated from the annual cycle of the normalized difference vegetation index (NDVI). Spatial patterns of interannual variations, teleconnections, and long-term trends were also analyzed in all metrics. At mid-to-high latitudes, all indices exhibit larger variability at interannual to decadal time scales than at spatial scales of a few kilometers. Trends in spring onset vary across space and time. However, compared to the long-term trend, interannual-to-decadal variability generally accounts for a larger portion of the total variance in spring-onset timing. Therefore, spring-onset trends identified from short existing records may be aliased by decadal climate variations due to their limited temporal depth, even when these records span the entire satellite era. Based on our findings, we also demonstrate that our indices have skill in representing ecosystem-level spring phenology and may have important implications for understanding relationships between phenology, atmospheric dynamics, and climate variability.

  3. Quantifying drivers of wild pig movement across multiple spatial and temporal scales

    USGS Publications Warehouse

    Kay, Shannon L.; Fischer, Justin W.; Monaghan, Andrew J.; Beasley, James C; Boughton, Raoul; Campbell, Tyler A; Cooper, Susan M; Ditchkoff, Stephen S.; Hartley, Stephen B.; Kilgo, John C; Wisely, Samantha M; Wyckoff, A Christy; Vercauteren, Kurt C.; Pepin, Kim M

    2017-01-01

    The analytical framework we present can be used to assess movement patterns arising from multiple data sources for a range of species while accounting for spatio-temporal correlations. Our analyses show the magnitude by which reaction norms can change based on the temporal scale of response data, illustrating the importance of appropriately defining temporal scales of both the movement response and covariates depending on the intended implications of research (e.g., predicting effects of movement due to climate change versus planning local-scale management). We argue that consideration of multiple spatial scales within the same framework (rather than comparing across separate studies post-hoc) gives a more accurate quantification of cross-scale spatial effects by appropriately accounting for error correlation.

  4. Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.

    PubMed

    George, Brandon; Aban, Inmaculada

    2015-01-15

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen

    2016-07-27

    Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee of approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in terms of convergence speed and computational complexity, and its upper bound on prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC and HEVC, as well as existing super-resolution-based methods, in rate-distortion performance and visual quality.

  6. Illusory Reversal of Causality between Touch and Vision has No Effect on Prism Adaptation Rate.

    PubMed

    Tanaka, Hirokazu; Homma, Kazuhiro; Imamizu, Hiroshi

    2012-01-01

    Learning, according to the Oxford Dictionary, is "to gain knowledge or skill by studying, from experience, from being taught, etc." In order to learn from experience, the central nervous system has to decide what action leads to what consequence, and temporal perception plays a critical role in determining the causality between actions and consequences. In motor adaptation, causality between action and consequence is implicitly assumed, so that a subject adapts to a new environment based on the consequence caused by her action. Adaptation to visual displacement induced by prisms is a prime example: the visual error signal associated with the motor output contributes to the recovery of accurate reaching, and delayed feedback of visual error can decrease the adaptation rate. The subjective feeling of the temporal order of action and consequence, however, can be modified or even reversed when the sense of simultaneity is manipulated with artificially delayed feedback. Our previous study (Tanaka et al., 2011; Exp. Brain Res.) demonstrated that the rate of prism adaptation was unaffected when the subjective delay of visual feedback was shortened. This study asked whether subjects could adapt to prism displacement, and whether the rate of adaptation was affected, when the subjective temporal order was illusorily reversed. Adapting to an additional 100 ms delay and its sudden removal caused a positive shift of the point of simultaneity in a temporal-order judgment experiment, indicating an illusory reversal of action and consequence. We found that, even in this case, subjects were able to adapt to prism displacement with a learning rate that was statistically indistinguishable from that without temporal adaptation. This result provides further evidence for the dissociation between conscious temporal perception and motor adaptation.

  7. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of anthropogenic CO2 emissions from the atmosphere.
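
    The effect of "temporally correlated random error" on an uncertainty estimate can be illustrated with a toy AR(1) model. This is a purely hypothetical sketch with made-up numbers, not the authors' framework: a lag-1 correlation ρ inflates the 2σ uncertainty of a multi-year mean by roughly √((1+ρ)/(1−ρ)) relative to the independent-error case.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, sigma, rho = 50, 0.5, 0.6   # hypothetical error std (Pg C/yr), lag-1 corr

def ar1_series(n, sigma, rho, rng):
    """AR(1) noise with stationary standard deviation sigma."""
    e = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2), n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return x

# Monte Carlo: 2-sigma of the multi-year-mean error, correlated vs independent.
means_corr = [ar1_series(n_years, sigma, rho, rng).mean() for _ in range(2000)]
means_iid = [rng.normal(0.0, sigma, n_years).mean() for _ in range(2000)]

# With rho = 0.6 the correlated 2-sigma comes out roughly double the
# independent one, close to the sqrt((1+rho)/(1-rho)) = 2 prediction.
print(2 * np.std(means_corr), 2 * np.std(means_iid))
```

The point of the sketch is only that ignoring temporal correlation understates the uncertainty of decadal budget terms, which is why the framework above treats it explicitly.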

  8. Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain

    PubMed Central

    Schwartz, Myrna F.; Kimberg, Daniel Y.; Walker, Grant M.; Brecher, Adelyn; Faseyitan, Olufunsho K.; Dell, Gary S.; Mirman, Daniel; Coslett, H. Branch

    2011-01-01

    It is thought that semantic memory represents taxonomic information differently from thematic information. This study investigated the neural basis for the taxonomic-thematic distinction in a unique way. We gathered picture-naming errors from 86 individuals with poststroke language impairment (aphasia). Error rates were determined separately for taxonomic errors (“pear” in response to apple) and thematic errors (“worm” in response to apple), and their shared variance was regressed out of each measure. With the segmented lesions normalized to a common template, we carried out voxel-based lesion-symptom mapping on each error type separately. We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain. PMID:21540329

  9. Role of color memory in successive color constancy.

    PubMed

    Ling, Yazhu; Hurlbert, Anya

    2008-06-01

    We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.

  10. The cerebellum for jocks and nerds alike.

    PubMed

    Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J

    2014-01-01

    Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. 
The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.

  11. The cerebellum for jocks and nerds alike

    PubMed Central

    Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.

    2014-01-01

    Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. 
The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains. PMID:24987338

  12. Truncation of Spherical Harmonic Series and its Influence on Gravity Field Modelling

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Gruber, T.; Rummel, R.

    2009-04-01

    Least-squares adjustment is a very common and effective tool for the calculation of global gravity field models in terms of spherical harmonic series. However, since the gravity field is a continuous field function, its optimal representation by a finite series of spherical harmonics is connected with a set of fundamental problems. Particularly worth mentioning here are cut-off errors and aliasing effects. These problems stem from the truncation of the spherical harmonic series and from the fact that the spherical harmonic coefficients cannot be determined independently of each other within the adjustment process in the case of discrete observations. The latter is shown by the non-diagonal variance-covariance matrices of gravity field solutions. Sneeuw showed in 1994 that the off-diagonal matrix elements - at least if data are equally weighted - are the result of a loss of orthogonality of the Legendre polynomials on regular grids. The poster addresses questions arising from the truncation of spherical harmonic series in spherical harmonic analysis and synthesis, such as: (1) How does the high-frequency data content (outside the parameter space) affect the estimated spherical harmonic coefficients? (2) Where should the spherical harmonic series be truncated in the adjustment process in order to avoid high-frequency leakage? (3) Given a set of spherical harmonic coefficients resulting from an adjustment, what is the effect of using only a truncated version of it?

  13. Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.

    PubMed

    Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-09-18

    High-resolution, wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and the physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm²) and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.
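
    The quoted super-resolution factor is consistent with simple arithmetic (a back-of-the-envelope check, not taken from the paper): the pixel-limited Nyquist half-pitch equals the 1.67 µm pixel size, and dividing by the achieved 770 nm half-pitch recovers the stated factor.

```python
# Nyquist-limited half-pitch for a bare sensor equals the pixel size.
pixel_size_um = 1.67            # sensor pixel size (µm), from the abstract
achieved_half_pitch_um = 0.77   # resolved half-pitch (µm), from the abstract

factor = pixel_size_um / achieved_half_pitch_um
print(round(factor, 2))  # -> 2.17
```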

  14. High-accuracy 3D Fourier forward modeling of gravity field based on the Gauss-FFT technique

    NASA Astrophysics Data System (ADS)

    Zhao, Guangdong; Chen, Bo; Chen, Longwei; Liu, Jianxin; Ren, Zhengyong

    2018-03-01

    The 3D Fourier forward modeling of 3D density sources is capable of providing 3D gravity anomalies consistent with the meshed density distribution within the whole source region. This paper first derives a set of analytical expressions, employing 3D Fourier transforms, for calculating the gravity anomalies of a 3D density source approximated by right rectangular prisms. To reduce the errors due to aliasing, imposed periodicity, and edge effects in Fourier-domain modeling, we apply the 3D Gauss-FFT technique to forward modeling of 3D gravity anomalies. The capability and adaptability of this scheme are tested on simple synthetic models. The results show that the accuracy of the Fourier forward methods using the Gauss-FFT with 4 Gaussian nodes (or more) is comparable to that of spatial-domain modeling. In addition, the "ghost" source effects in the 3D Fourier forward gravity field, due to the imposed periodicity of the standard FFT algorithm, are markedly suppressed by the application of the 3D Gauss-FFT algorithm. More importantly, the execution times of the 4-node Gauss-FFT modeling are reduced by two orders of magnitude compared with the spatial forward method. This demonstrates that the improved Fourier method is an efficient and accurate forward modeling tool for the gravity field.

  15. Two-dimensional energy spectra in a high Reynolds number turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Chandran, Dileep; Baidya, Rio; Monty, Jason; Marusic, Ivan

    2016-11-01

    The current study measures the two-dimensional (2D) spectra of the streamwise velocity component (u) in a high-Reynolds-number turbulent boundary layer for the first time. A 2D spectrum shows the contribution of streamwise (λx) and spanwise (λy) length scales to the streamwise variance at a given wall height (z). 2D spectra could be a better tool for analysing spectral scaling laws, as they are free of the energy-aliasing errors that can be present in one-dimensional spectra. A novel method is used to calculate the 2D spectra from the 2D correlation of u, which is obtained by measuring velocity time series at various spanwise locations using hot-wire anemometry. At low Reynolds number, the shape of the 2D spectra at a constant energy level shows λy ∝ √(zλx) behaviour at larger scales, in agreement with the literature. However, at high Reynolds number, it is observed that the square-root relationship gradually transforms into a linear relationship (λy ∝ λx), which could be caused by large packets of eddies whose length grows in proportion to the growth of their width. Additionally, we show that this linear relationship observed at high Reynolds number is consistent with attached-eddy predictions. The authors gratefully acknowledge the support of the Australian Research Council.
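
    The correlation-to-spectrum step described above can be sketched via the Wiener-Khinchin relation (a schematic on synthetic periodic data, not the authors' estimator): the 2D energy spectrum is the Fourier transform of the 2D two-point correlation of u, so computing the spectrum from the measured correlation and computing it directly from the field must agree.

```python
import numpy as np

# Synthetic stand-in for a u(y, x) velocity field on a periodic grid.
rng = np.random.default_rng(1)
u = rng.standard_normal((64, 128))

# Direct route: periodogram of the field.
U = np.fft.fft2(u)
spectrum_direct = (np.abs(U) ** 2) / u.size

# Wiener-Khinchin route: correlation first, then transform back to a spectrum.
corr = np.fft.ifft2(spectrum_direct).real      # circular 2D autocorrelation
spectrum_from_corr = np.fft.fft2(corr).real    # spectrum recovered from corr

print(np.allclose(spectrum_direct, spectrum_from_corr))  # -> True
```

In the experiment the correlation comes from spanwise-separated hot-wire time series rather than a full 2D snapshot, but the transform step is the same.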

  16. Using HFMEA to assess potential for patient harm from tubing misconnections.

    PubMed

    Kimchi-Woods, Judy; Shultz, John P

    2006-07-01

    Reported cases of tubing misconnections and other tubing errors prompted Columbus Children's Hospital to study their potential for harm in its patient population. A Healthcare Failure Mode and Effects Analysis (HFMEA) was conducted in October 2004 to determine the risks inherent in the use and labeling of various enteral, parenteral, and other tubing types in patient care, and the potential for patient harm. An assessment of the practice culture revealed considerable variability among nurses and respiratory therapists within and between units. Work on the HFMEA culminated in recommendations of risk-reduction strategies. These included standardizing the process of labeling tubing throughout the organization, developing an online pictorial catalog listing available tubing supplies with all aliases used by staff, and conducting an inventory of all supplies to identify products that need to be purchased or discontinued. Three groups are working on implementing each of the recommendations. Most of the results realized so far involve the labeling of tubing. The pediatric intensive care unit labels all tubing with infused medications 85% of the time; tubing inserted during surgery or in interventional radiology is labeled 53% and 93% of the time, respectively. Pocket-size cards with printed labels were tested in three units. This proactive risk assessment project has identified failure modes and possible causes and solutions; several recommendations have been implemented. No tubing misconnections have been reported.

  17. Effect of Divided Attention on Children's Rhythmic Response

    ERIC Educational Resources Information Center

    Thomas, Jerry R.; Stratton, Richard K.

    1977-01-01

    Audio and visual interference did not significantly impair rhythmic response levels of second- and fourth-grade boys as measured by space error scores, though audio input resulted in significantly less consistent temporal performance. (MB)

  18. A multi-pixel InSAR time series analysis method: Simultaneous estimation of atmospheric noise, orbital errors and deformation

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2016-12-01

    InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of the interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as the model covariance. Given the problem size, we avoid matrix multiplications with the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. 
We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
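
    The Fourier-domain covariance trick mentioned above can be sketched as follows; the grid size, amplitude and correlation length are illustrative assumptions, not values from the study:

```python
import numpy as np

# Sketch: a stationary exponential covariance C(r) = sigma^2 exp(-r/L) acts as
# a convolution, so C @ v is computable by FFT without forming the full matrix.
# Grid size and parameters are assumed for illustration.
ny, nx = 128, 128
sigma, L = 1.0, 10.0                # assumed amplitude and correlation length (pixels)

y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
dy = np.minimum(y, ny - y)          # distances on a periodic grid, so the kernel
dx = np.minimum(x, nx - x)          # matches the FFT's circular convolution
kernel = sigma**2 * np.exp(-np.hypot(dy, dx) / L)

def apply_cov(v):
    """Compute C @ v as a circular convolution via the FFT."""
    return np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(v)).real

rng = np.random.default_rng(1)
v, u2 = rng.standard_normal((2, ny, nx))
w = apply_cov(v)
# the implied operator is symmetric, as a covariance must be
assert np.isclose(np.vdot(u2, apply_cov(v)), np.vdot(apply_cov(u2), v))
```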

  19. An exploratory study of temporal integration in the peripheral retina of myopes

    NASA Astrophysics Data System (ADS)

    Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.

    2017-08-01

    The visual system takes time to respond to visual stimuli; neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency towards increased temporal integration thresholds observed in myopes deserves further investigation.

  20. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
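
    As a minimal illustration of the observation model underlying this filtering theory, an inhomogeneous Poisson spike train can be simulated by thinning (Lewis-Shedler); the rate function and constants below are assumptions chosen for the example:

```python
import numpy as np

# Illustration (assumed rate function and constants): generate an
# inhomogeneous Poisson spike train by thinning a homogeneous one.
rng = np.random.default_rng(42)

def rate(t):
    return 20.0 + 15.0 * np.sin(2 * np.pi * t)   # time-varying firing rate (Hz)

T = 5.0          # duration (s)
lam_max = 35.0   # upper bound on rate(t)

# candidate events from a homogeneous Poisson process at rate lam_max ...
n = rng.poisson(lam_max * T)
candidates = np.sort(rng.uniform(0.0, T, n))
# ... each kept with probability rate(t) / lam_max
spikes = candidates[rng.uniform(0.0, lam_max, candidates.size) < rate(candidates)]

# the spike count fluctuates around the integral of rate(t), which is 100 here
print(spikes.size)
```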

  1. Temporal characteristics of imagined and actual walking in frail older adults.

    PubMed

    Nakano, Hideki; Murata, Shin; Shiraiwa, Kayoko; Iwase, Hiroaki; Kodama, Takayuki

    2018-05-09

    Mental chronometry, commonly used to evaluate motor imagery ability, measures the imagined time required for movements. Previous studies investigating mental chronometry of walking have investigated healthy older adults. However, mental chronometry in frail older adults has not yet been clarified. To investigate temporal characteristics of imagined and actual walking in frail older adults. We investigated the time required for imagined and actual walking along three walkways of different widths [width(s): 50, 25, 15 cm × length: 5 m] in 29 frail older adults and 20 young adults. Imagined walking was measured with mental chronometry. We observed significantly longer imagined and actual walking times along walkways of 50, 25, and 15 cm width in frail older adults compared with young adults. Moreover, temporal differences (absolute error) between imagined and actual walking were significantly greater in frail older adults than in young adults along walkways with a width of 25 and 15 cm. Furthermore, we observed significant differences in temporal differences (constant error) between frail older adults and young adults for walkways with a width of 25 and 15 cm. Frail older adults tended to underestimate actual walking time in imagined walking trials. Our results suggest that walkways of different widths may be a useful tool to evaluate age-related changes in imagined and actual walking in frail older adults.

  2. Comparison of the resulting error in data fusion techniques when used with remote sensing, earth observation, and in-situ data sets for water quality applications

    NASA Astrophysics Data System (ADS)

    Ziemba, Alexander; El Serafy, Ghada

    2016-04-01

    Ecological modeling and water quality investigations are complex processes that can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product carries a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision-making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined so that the various classifications of data, complementary, redundant, and cooperative, are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. 
We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operational forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
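
    As a minimal sketch of fusing redundant data of differing quality, the classical inverse-variance weighting rule below (with hypothetical chlorophyll values, not data from the study) also produces the fused uncertainty that the text argues must be communicated:

```python
# Minimal sketch of redundant-data fusion by inverse-variance weighting, using
# hypothetical values: one remote-sensing and one in-situ estimate of the same
# water-quality variable.
x1, var1 = 4.2, 0.8**2   # e.g. satellite-derived chlorophyll estimate and variance
x2, var2 = 3.6, 0.3**2   # e.g. in-situ estimate and variance

w1, w2 = 1 / var1, 1 / var2
fused = (w1 * x1 + w2 * x2) / (w1 + w2)
fused_var = 1 / (w1 + w2)    # always below the smallest input variance

print(round(fused, 3), round(fused_var, 3))  # 3.674 0.079
```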

  3. Radiometric & Geometric normalization of Sentinel optical data and VHR data to build-up time-series, an example in Tonga for the monitoring of mangrove health vs. climate change

    NASA Astrophysics Data System (ADS)

    Serra, Romain; Valette, Anne; Taji, Amine; Emsley, Stephen

    2017-04-01

    Building climate resilience (i.e. climate change adaptation or self-renewal of ecosystems) or planning environmental rehabilitation and nature-based solutions to address vulnerabilities to disturbances has two prerequisites: 1- identify the disorder, i.e. stresses caused by events such as hurricanes, tsunamis, heavy rains, hailstone falls, smog… or accumulated over time such as warming, rainfall change, ocean acidification, soil salinization… and measured by trends; and 2- qualify its impact on the ecosystems, i.e. the resulting strains. Mitigation of threats is accordingly twofold: i. on local temporal scales for protection, ii. on long scales for prevention and sustainability. Assessment and evaluation prior to designing future scenarios require concomitant acquisition of (a) climate data at global and local spatial scales that describe the changes at the various temporal scales of the phenomena without signal aliasing, and (b) the ecosystems' status at the scales of the forcing and of relaxation times, hysteresis lags, periodicities of orbits in chaotic systems, shifts from one ecosystem attractor to another, etc. Dissociating groups of timescales and spatial scales facilitates the analysis and helps set up monitoring schemes. The Sentinel-2 mission, with a revisit of the Earth every few days and a 10 m on-ground resolution, is a good automatic spectro-analytical monitoring system because it detects changes in numerous optical and IR bands at spatial scales proper to the description of land parcels. Combined with photo-interpreted VHR data, which describe the environment more crudely but locate land-parcel borders with high precision, it helps relate stresses to strains so that the relationships can be understood empirically. An example is provided for Tonga, courtesy of ESA support and an ADB request, with a focus on time-series consistency, which requires radiometric and geometric normalisation of the EO data sets. 
Methodologies have been developed in the framework of ESA programs and the EC program H2020 (Co-Resyf).

  4. Low Noise Infrasonic Sensor System with High Reduction of Natural Background Noise

    DTIC Science & Technology

    2006-05-01

    local processing allows a variety of options both in the array geometry and signal processing. A generic geometry is indicated in Figure 2. Geometric...higher frequency sound detected . Table 1 provides a comparison of piezocable and microbarograph based arrays . Piezocable Sensor Local Signal ...aliasing associated with the current infrasound sensors used at large spacing in the present designs of infrasound monitoring arrays , particularly in the

  5. Finite Element Analysis of Lamb Waves Acting within a Thin Aluminum Plate

    DTIC Science & Technology

    2007-09-01

    signal to avoid time aliasing % LambWaveMode % lamb wave mode to simulate; use proper phase velocity curve % thickness % thickness of...analysis of the simulated signal response data demonstrated that elevated temperatures delay wave propagation, although the delays are minimal at the...Echo Techniques Ultrasonic NDE techniques are based on the propagation and reflection of elastic waves , with the assumption that damage in the

  6. Exploring the Acoustic Nonlinearity for Monitoring Complex Aerospace Structures

    DTIC Science & Technology

    2008-02-27

    nonlinear elastic waves, embedded ultrasonics, nonlinear diagnostics, aerospace structures, structural joints. 16. SECURITY CLASSIFICATION OF: 17...sampling, 100 MHz bandwidth with noise and anti- aliasing filters, general-purpose alias-protected decimation for all sample rates and quad digital down...conversion ( DDC ) with up to 40 MHz IF bandwidth. Specified resolution of NI PXI 5142 is 14-bits with the noise floor approaching -85 dB. Such a

  7. An Evaluation of the TRIPS Computer System (Extended Technical Report)

    DTIC Science & Technology

    2008-07-08

    Mario Marino Nitya Ranganathan Behnam Robatmili Aaron Smith James Burrill Stephen W. Keckler Doug Burger Kathryn S. McKinley Computer Architecture and...Marino, Nitya Ranganathan , Behnam Robatmili, Aaron Smith, James Burrill, Stephen W. Keckler, Doug Burger, Kathryn S. McKinley; ASPLOS 2009, Washington DC...aggressively register allo- cate more memory accesses by using programmer knowledge about pointer aliasing, much of which may be automated. They also

  8. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity-discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it allows reducing the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity-discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, such as the monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
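
    The pixel-wise correction idea can be illustrated with a generic Hamming(7,4) block code; this is a textbook stand-in, not the specific code proposed in the paper:

```python
import numpy as np

# Generic illustration (not the authors' specific code): a Hamming(7,4) block
# code applied to a pixel's temporal bit sequence corrects any single
# corrupted frame.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])   # generator matrix [I4 | P]
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])   # parity-check matrix [P^T | I3]

def encode(bits4):
    return (np.array(bits4) @ G) % 2

def decode(bits7):
    r = np.array(bits7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        # the syndrome equals the column of H at the flipped position
        err = np.where((H.T == syndrome).all(axis=1))[0][0]
        r[err] ^= 1
    return r[:4]

msg = [1, 0, 1, 1]   # four projection-pattern bits for one pixel
code = encode(msg)
code[2] ^= 1         # one frame corrupted at this pixel
assert decode(code).tolist() == msg
```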

  9. Context dependent anti-aliasing image reconstruction

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.; Hunt, A.; Arlia, N.

    1989-01-01

    Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context dependent interpolation is computed through ensemble average statistics using high resolution training imagery from which the lower resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.

  10. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  11. POCS-based reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE): a general algorithm for reducing motion-related artifacts

    PubMed Central

    Chu, Mei-Lan; Chang, Hing-Chiu; Chung, Hsiao-Wen; Truong, Trong-Kha; Bashir, Mustafa R.; Chen, Nan-kuei

    2014-01-01

    Purpose A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion weighted imaging (DWI). Theory Images with reduced artifacts are reconstructed with an iterative POCS procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. Methods The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved DWI data corresponding to different k-space trajectories and matrix condition numbers. Results Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. Conclusion POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods. PMID:25394325

  12. A technology review of time-of-flight photon counting for advanced remote sensing

    NASA Astrophysics Data System (ADS)

    Lamb, Robert A.

    2010-04-01

    Time correlated single photon counting (TCSPC) has made tremendous progress during the past ten years, enabling improved performance in precision time-of-flight (TOF) rangefinding and lidar. In this review the development and performance of several ranging systems that use TCSPC for accurate ranging and range profiling over distances up to 17 km is presented. A range resolution of a few millimetres is routinely achieved over distances of several kilometres. These systems include single-wavelength devices operating in the visible; multi-wavelength systems covering the visible and near infra-red; the use of electronic gating to reduce in-band solar background; and, most recently, operation at high repetition rates without range aliasing, typically 10 MHz over several kilometres. These systems operate at very low optical power (<100 μW). The technique therefore has potential for eye-safe lidar monitoring of the environment and obvious military, security and surveillance sensing applications. The review highlights the theoretical principles of photon counting and the progress made in developing absolute ranging techniques that enable high repetition rate data acquisition while avoiding range aliasing. Technology trends in TCSPC rangefinding are merging with those of quantum cryptography, and its future application to revolutionary quantum imaging provides diverse and exciting research into secure covert sensing, ultra-low power active imaging and quantum rangefinding.
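
    The range-aliasing constraint mentioned above follows from the classical unambiguous-range formula, sketched here:

```python
# Why high repetition rates cause range aliasing: the classical unambiguous
# range is c / (2 * f_rep), far shorter than the kilometre-scale distances
# quoted above, so absolute (alias-resolving) ranging techniques are needed.
c = 299_792_458.0   # speed of light (m/s)
f_rep = 10e6        # 10 MHz repetition rate, as cited in the review

r_unamb = c / (2 * f_rep)
print(round(r_unamb, 2))   # ~14.99 m: returns from beyond this distance wrap around
```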

  13. A novel x-ray detector design with higher DQE and reduced aliasing: Theoretical analysis of x-ray reabsorption in detector converter material

    NASA Astrophysics Data System (ADS)

    Nano, Tomi; Escartin, Terenz; Karim, Karim S.; Cunningham, Ian A.

    2016-03-01

    The ability to improve visualization of structural information in digital radiography without increasing radiation exposure requires improved image quality across all spatial frequencies, especially at high frequencies. The detective quantum efficiency (DQE) as a function of spatial frequency quantifies the image quality given by an x-ray detector. We present a method of increasing DQE at high spatial frequencies by improving the modulation transfer function (MTF) and reducing noise aliasing. The Apodized Aperture Pixel (AAP) design uses a detector with micro-elements to synthesize desired pixels and provides higher DQE than conventional detector designs. A cascaded system analysis (CSA) that incorporates x-ray interactions is used for comparison of the theoretical MTF, noise power spectrum (NPS), and DQE. Signal and noise transfer through the converter material is shown to consist of correlated and uncorrelated terms. The AAP design was shown to improve the DQE of both material types: those with predominantly correlated transfer (such as CsI) and those with predominantly uncorrelated transfer (such as Se). Improvement in the MTF by 50% and the DQE by 100% at the sampling cut-off frequency is obtained when uncorrelated transfer is prevalent through the converter material. Optimizing high-frequency DQE results in improved image contrast and visualization of small structures and fine detail.

  14. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) The signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depends upon the reconstruction filter(s) used. The selection depends upon the application.
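
    The undersampling effect described above can be demonstrated in a few lines; the frequencies are arbitrary illustrative choices:

```python
import numpy as np

# Demonstration of undersampling: a tone above the Nyquist frequency folds
# back to an aliased frequency. Frequencies are illustrative assumptions.
fs = 100.0   # sampling rate (Hz); Nyquist = 50 Hz
f = 70.0     # signal frequency, deliberately above Nyquist

t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f * t)

spectrum = np.abs(np.fft.rfft(x))
f_peak = np.fft.rfftfreq(t.size, 1 / fs)[spectrum.argmax()]
print(f_peak)   # 30.0: the 70 Hz tone appears at |fs - f| = 30 Hz
```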

  15. A new multiscale noise tuning stochastic resonance for enhanced fault diagnosis in wind turbine drivetrains

    NASA Astrophysics Data System (ADS)

    Hu, Bingbing; Li, Bing

    2016-02-01

    It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method, originally based on the discrete wavelet transform (DWT), has disadvantages such as shift variance and aliasing effects in engineering applications. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis through the merits of the DTCWT (near shift invariance and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching a single wavelet basis function to the signal being analyzed, which may speed up the signal processing and allow on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer-ring and shaft-coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post-spectral analysis, and EMD-based spectral analysis methods.
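
    The shift variance of the critically sampled DWT that motivates the move to the DTCWT can be demonstrated with a single-level Haar transform; the step signal below is an assumed example:

```python
import numpy as np

# Sketch of DWT shift variance: the same step edge yields different Haar
# detail-subband energy after a one-sample shift. Signal is an assumed example.
def haar_detail_energy(x):
    pairs = x.reshape(-1, 2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return float((detail ** 2).sum())

x0 = np.zeros(16)
x0[8:] = 1.0      # step edge aligned with an analysis pair boundary
x1 = np.zeros(16)
x1[9:] = 1.0      # the same edge, shifted by one sample

e0 = haar_detail_energy(x0)
e1 = haar_detail_energy(x1)
print(e0, e1)     # 0 vs approximately 0.5: same feature, different energy
```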

  16. [Object Separation from Medical X-Ray Images Based on ICA].

    PubMed

    Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun

    2015-03-01

    X-ray medical images can reveal diseased tissue in patients and have important reference value for medical diagnosis. To address the noise, poor tonal gradation and overlapping (aliased) organs of traditional X-ray images, this paper proposes multi-spectrum X-ray imaging combined with an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the proportion of each organ in the images, the aliasing thickness matrix of each pixel is isolated. Finally, independent component analysis obtains the convergence matrix to reconstruct the target object using blind source separation theory. In the ICA algorithm, it was found that when the number of convergence iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitudes lie in the interval [25, 45], the target images have high contrast and little distortion. A three-dimensional plot of the peak signal-to-noise ratio (PSNR) shows that different convergence iteration counts and amplitudes have a considerable influence on image quality. The contrast and edge information of the experimental images achieve better results with 85 convergence iterations and an amplitude of 35 in the ICA algorithm.

  17. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
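
    The spatial aliasing frequency governing the hybrid crossover can be estimated from the transducer spacing; the spacing below is an assumed example, not a value from the paper:

```python
# Sketch of the spatial aliasing limit that motivates the hybrid approach:
# for transducer spacing d, faithful reproduction holds only up to roughly
# f_alias = c / (2 * d). The spacing is an assumed example.
c = 343.0   # speed of sound (m/s)
d = 0.10    # assumed 10 cm loudspeaker spacing

f_alias = c / (2 * d)
print(f_alias)   # about 1715 Hz: vector panning takes over above this
```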

  18. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. Estimating temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall in poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. Reducing rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions, and these estimates all simulated streamflow superior to the results of a traditional calibration approach. The choice of wavelet is shown to have a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions.
The extent and variance of the rainfall time series able to simulate streamflow superior to that of a traditional calibration approach demonstrate equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
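    The dimensionality-reduction idea can be sketched with a single-level Haar transform; the study's actual wavelet choice, decomposition depth, and DREAM(ZS) inversion are not reproduced here, and the rainfall values are invented:

```python
# Minimal single-level Haar DWT: reduce a rainfall series to approximation
# coefficients (the low-dimensional representation inverted against streamflow)
# plus detail coefficients, then reconstruct the original series.
import math

def haar_dwt(signal):
    """One decomposition level; signal length must be even."""
    s = math.sqrt(2.0)
    approx = [(signal[2*i] + signal[2*i+1]) / s for i in range(len(signal)//2)]
    detail = [(signal[2*i] - signal[2*i+1]) / s for i in range(len(signal)//2)]
    return approx, detail

def haar_idwt(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

rain = [0.0, 1.2, 3.4, 0.8, 0.0, 0.0, 2.1, 0.5]  # hypothetical hourly rainfall (mm)
approx, detail = haar_dwt(rain)
# Estimating only the 4 approximation coefficients (details set to zero)
# halves the number of unknowns, at the cost of smoothing the series.
smoothed = haar_idwt(approx, [0.0] * len(approx))
recon = haar_idwt(approx, detail)  # full coefficients reconstruct exactly
```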

  19. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; ...

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their contribution to global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades.
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO₂ emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
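    The way independent error terms combine into a net-uptake uncertainty can be illustrated by a quadrature sum. The fossil-fuel and growth-rate 2σ values are those quoted above for the 2000s; the land-use value is only an assumed placeholder, and the paper's treatment of temporally correlated errors is not captured by a plain quadrature sum:

```python
# Quadrature combination of independent error terms for a budget residual
# (net uptake = emissions - atmospheric growth).
import math

def combined_2sigma(*terms):
    """2-sigma uncertainty of a sum/difference of independent quantities."""
    return math.sqrt(sum(t * t for t in terms))

sigma_fossil = 1.0   # Pg C/yr, 2000s value quoted in the abstract
sigma_growth = 0.3   # Pg C/yr, 2000s value quoted in the abstract
sigma_landuse = 0.5  # Pg C/yr, assumed for illustration only

u = combined_2sigma(sigma_fossil, sigma_growth, sigma_landuse)
print(f"2-sigma uncertainty in net uptake: {u:.2f} Pg C/yr")
```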

  20. Sources of Phoneme Errors in Repetition: Perseverative, Neologistic, and Lesion Patterns in Jargon Aphasia

    PubMed Central

    Pilkington, Emma; Keidel, James; Kendrick, Luke T.; Saddy, James D.; Sage, Karen; Robson, Holly

    2017-01-01

    This study examined patterns of neologistic and perseverative errors during word repetition in fluent Jargon aphasia. The principal hypotheses accounting for Jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-lesion symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index: POI) of neologistic errors and patterns of perseveration in 25 individuals with Jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 aphasia participants. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region of interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was best predictive of a Jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring result in jargon repetition. Insights for clinical management and future directions are discussed. PMID:28522967

  1. Temporal and spatial variation in allocating annual traffic activity across an urban region and implications for air quality assessments

    PubMed Central

    Batterman, Stuart

    2015-01-01

    Patterns of traffic activity, including changes in the volume and speed of vehicles, vary over time and across urban areas and can substantially affect vehicle emissions of air pollutants. Time-resolved activity at the street scale typically is derived using temporal allocation factors (TAFs) that allow the development of emissions inventories needed to predict concentrations of traffic-related air pollutants. This study examines the spatial and temporal variation of TAFs, and characterizes prediction errors resulting from their use. Methods are presented to estimate TAFs and their spatial and temporal variability and used to analyze total, commercial and non-commercial traffic in the Detroit, Michigan, U.S. metropolitan area. The variability of total volume estimates, quantified by the coefficient of variation (COV) representing the percentage departure from expected hourly volume, was 21, 33, 24 and 33% for weekdays, Saturdays, Sundays and holidays, respectively. Prediction errors mostly resulted from hour-to-hour variability on weekdays and Saturdays, and from day-to-day variability on Sundays and holidays. Spatial variability was limited across the study roads, most of which were large freeways. Commercial traffic had different temporal patterns and greater variability than noncommercial vehicle traffic, e.g., the weekday variability of hourly commercial volume was 28%. The results indicate that TAFs for a metropolitan region can provide reasonably accurate estimates of hourly vehicle volume on major roads. While vehicle volume is only one of many factors that govern on-road emission rates, air quality analyses would be strengthened by incorporating information regarding the uncertainty and variability of traffic activity. PMID:26688671
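    The variability metric used above, the coefficient of variation expressed as a percentage departure from the expected hourly volume, can be sketched as follows; the traffic counts are invented:

```python
# Coefficient of variation (COV) of hourly traffic volumes, as a percentage
# of the mean (expected) volume.
import statistics

def cov_percent(volumes):
    mean = statistics.mean(volumes)
    return 100.0 * statistics.pstdev(volumes) / mean

# hypothetical counts for the same hour-of-day across several weekdays
weekday_hour_volumes = [1180, 950, 1310, 1040, 1220]
print(f"COV: {cov_percent(weekday_hour_volumes):.1f}%")
```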

  2. Biases in Time-Averaged Field and Paleosecular Variation Studies

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Constable, C.

    2009-12-01

    Challenges to constructing time-averaged field (TAF) and paleosecular variation (PSV) models of Earth’s magnetic field over million year time scales are the uneven geographical and temporal distribution of paleomagnetic data and the absence of full vector records of the magnetic field variability at any given site. Recent improvements in paleomagnetic data sets now allow regional assessment of the biases introduced by irregular temporal sampling and the absence of full vector information. We investigate these effects over the past few Myr for regions with large paleomagnetic data sets, where the TAF and/or PSV have been of previous interest (e.g., significant departures of the TAF from the field predicted by a geocentric axial dipole). We calculate the effects of excluding paleointensity data from TAF calculations, and find these to be small. For example, at Hawaii, we find that for the past 50 ka, estimates of the TAF direction are minimally affected if only paleodirectional data versus the full paleofield vector are used. We use resampling techniques to investigate biases incurred by the uneven temporal distribution. Key to the latter issue is temporal information on a site-by-site basis. At Hawaii, resampling of the paleodirectional data onto a uniform temporal distribution, assuming no error in the site ages, reduces the magnitude of the inclination anomaly for the Brunhes, Gauss and Matuyama epochs. However inclusion of age errors in the sampling procedure leads to TAF estimates that are close to those reported for the original data sets. We discuss the implications of our results for global field models.

  3. Global Vertical Rates from VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; MacMillan, D.; Petrov, L.

    2003-01-01

    The analysis of global VLBI observations provides vertical rates for 50 sites with formal errors less than 2 mm/yr and median formal error of 0.4 mm/yr. These sites are largely in Europe and North America with a few others in east Asia, Australia, South America and South Africa. The time interval of observations is up to 20 years. The error of the velocity reference frame is less than 0.5 mm/yr, but results from several sites with observations from more than one antenna suggest that the estimated vertical rates may have temporal variations or non-geophysical components. Comparisons with GPS rates and corresponding site position time series will be discussed.

  4. Spatio-temporal networks: reachability, centrality and robustness.

    PubMed

    Williams, Matthew J; Musolesi, Mirco

    2016-06-01

    Recent advances in spatial and temporal networks have enabled researchers to more accurately describe many real-world systems such as urban transport networks. In this paper, we study the response of real-world spatio-temporal networks to random error and systematic attack, taking a unified view of their spatial and temporal performance. We propose a model of spatio-temporal paths in time-varying spatially embedded networks which captures the property that, as in many real-world systems, interaction between nodes is non-instantaneous and governed by the space in which they are embedded. Through numerical experiments on three real-world urban transport systems, we study the effect of node failure on a network's topological, temporal and spatial structure. We also demonstrate the broader applicability of this framework to three other classes of network. To identify weaknesses specific to the behaviour of a spatio-temporal system, we introduce centrality measures that evaluate the importance of a node as a structural bridge and its role in supporting spatio-temporally efficient flows through the network. This exposes the complex nature of fragility in a spatio-temporal system, showing that there is a variety of failure modes when a network is subject to systematic attacks.
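    The time-respecting reachability underlying such spatio-temporal paths can be sketched with an earliest-arrival pass over timestamped contacts; the toy network is invented, and the paper's spatial embedding and centrality measures are not modeled:

```python
# Earliest-arrival reachability on a temporal network: edges are
# (u, v, departure_time, arrival_time) contacts, and a node is reachable
# only through a time-respecting sequence of contacts, capturing the
# non-instantaneous interactions described above.
def earliest_arrival(contacts, source, t_start=0):
    """One pass over contacts sorted by departure time."""
    arrival = {source: t_start}
    for u, v, dep, arr in sorted(contacts, key=lambda c: c[2]):
        if u in arrival and arrival[u] <= dep:
            if arr < arrival.get(v, float("inf")):
                arrival[v] = arr
    return arrival

contacts = [
    ("A", "B", 1, 2),
    ("B", "C", 3, 5),
    ("A", "C", 0, 9),   # earlier departure but later arrival
    ("C", "D", 4, 6),   # departs before C is reachable at t=5: unusable
]
print(earliest_arrival(contacts, "A"))
```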

  5. Monitoring of land subsidence and ground fissures in Xian, China 2005-2006: Mapped by sar Interferometry

    USGS Publications Warehouse

    Zhao, C.Y.; Zhang, Q.; Ding, X.-L.; Lu, Z.; Yang, C.S.; Qi, X.M.

    2009-01-01

    The City of Xian, China, has been experiencing significant land subsidence and ground fissure activity since the 1960s, which has brought severe geohazards including damage to buildings, bridges and other facilities. Monitoring of land subsidence and ground fissure activity can provide useful information for assessing the extent of, and mitigating, such geohazards. In order to achieve robust Synthetic Aperture Radar Interferometry (InSAR) results, six interferometric pairs of Envisat ASAR data covering 2005–2006 are first collected to analyze InSAR processing errors, such as temporal and spatial decorrelation error, external DEM error, atmospheric error and unwrapping error. The annual subsidence rate during 2005–2006 is then calculated by weighted averaging of two pairs of D-InSAR results with similar time spans. Lastly, GPS measurements are applied to calibrate the InSAR results, and centimeter precision is achieved. For ground fissure monitoring, five InSAR cross-sections are designed to demonstrate the relative subsidence difference across ground fissures. In conclusion, the final InSAR subsidence map for 2005–2006 shows four large subsidence zones in the hi-tech areas of the western, eastern and southern suburbs of Xian City, among which two subsidence cones are newly detected, and two ground fissures are deduced to have extended westward in the Yuhuazhai subsidence cone. This study shows that land subsidence and ground fissures are highly correlated spatially and temporally, and that both are correlated with hi-tech zone construction in Xian during 2005–2006.

  6. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved at any spatial and temporal resolution. The flexibility of the framework allows for input of data at any choice of time scale and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data as either an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
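    Precision-weighted averaging is an illustrative special case of combining a monitor observation with a model prediction, not the BME estimator itself; the weighting by error variance mirrors why poorer CTM performance reduces the model's influence. All numbers are illustrative:

```python
# Inverse-variance-weighted fusion of two estimates of the same quantity:
# the source with the larger error variance receives less weight.
def fuse(values_and_variances):
    """Precision-weighted estimate and its variance."""
    weights = [1.0 / v for _, v in values_and_variances]
    total = sum(weights)
    est = sum(w * x for w, (x, _) in zip(weights, values_and_variances)) / total
    return est, 1.0 / total

obs = (48.0, 4.0)    # ozone (ppb) from a monitor, small error variance
ctm = (60.0, 16.0)   # CTM prediction, larger error variance
est, var = fuse([obs, ctm])
print(f"fused estimate: {est:.1f} ppb, variance: {var:.1f}")
```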

  7. Improving z-tracking accuracy in the two-photon single-particle tracking microscope.

    PubMed

    Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
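    The MLE idea can be illustrated in a toy form: detection channels whose expected photon counts depend on z, and a grid search for the z that maximizes the Poisson log-likelihood of the observed counts. The Gaussian channel responses and their parameters are assumptions for illustration, not the TSUNAMI detection geometry:

```python
# Toy maximum-likelihood position estimation from photon counts.
import math

def expected_counts(z, amplitude=100.0, width=1.0):
    """Expected counts in two channels centered at z = -0.5 and z = +0.5."""
    return [amplitude * math.exp(-((z - c) / width) ** 2) for c in (-0.5, 0.5)]

def log_likelihood(observed, z):
    ll = 0.0
    for n, lam in zip(observed, expected_counts(z)):
        ll += n * math.log(lam) - lam  # Poisson log-likelihood, constant dropped
    return ll

def mle_z(observed, z_grid):
    return max(z_grid, key=lambda z: log_likelihood(observed, z))

z_grid = [i / 100.0 for i in range(-200, 201)]
observed = expected_counts(0.3)   # noise-free counts from a true z of 0.3
print(mle_z(observed, z_grid))    # recovers z near 0.3
```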

  8. Understanding The Neural Mechanisms Involved In Sensory Control Of Voice Production

    PubMed Central

    Parkinson, Amy L.; Flagmeier, Sabina G.; Manes, Jordan L.; Larson, Charles R.; Rogers, Bill; Robin, Donald A.

    2012-01-01

    Auditory feedback is important for the control of voice fundamental frequency (F0). In the present study we used neuroimaging to identify regions of the brain responsible for sensory control of the voice. We used a pitch-shift paradigm where subjects respond to an alteration, or shift, of voice pitch auditory feedback with a reflexive change in F0. To determine the neural substrates involved in these audio-vocal responses, subjects underwent fMRI scanning while vocalizing with or without pitch-shifted feedback. The comparison of shifted and unshifted vocalization revealed activation bilaterally in the superior temporal gyrus (STG) in response to the pitch shifted feedback. We hypothesize that the STG activity is related to error detection by auditory error cells located in the superior temporal cortex and efference copy mechanisms whereby this region is responsible for the coding of a mismatch between actual and predicted voice F0. PMID:22406500

  9. Time dependent wind fields

    NASA Technical Reports Server (NTRS)

    Chelton, D. B.

    1986-01-01

    Two tasks were performed: (1) determination of the accuracy of Seasat scatterometer, altimeter, and scanning multichannel microwave radiometer measurements of wind speed; and (2) application of Seasat altimeter measurements of sea level to study the spatial and temporal variability of geostrophic flow in the Antarctic Circumpolar Current. The results of the first task have identified systematic errors in wind speeds estimated by all three satellite sensors. However, in all cases the errors are correctable and corrected wind speeds agree between the three sensors to better than 1 m s⁻¹ in 96-day 2 deg. latitude by 6 deg. longitude averages. The second task has resulted in development of a new technique for using altimeter sea level measurements to study the temporal variability of large scale sea level variations. Application of the technique to the Antarctic Circumpolar Current yielded new information about the ocean circulation in this region of the ocean that is poorly sampled by conventional ship-based measurements.

  10. Complex phase error and motion estimation in synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Soumekh, M.; Yang, H.

    1991-06-01

    Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.

  11. Unbiased estimation of oceanic mean rainfall from satellite borne radiometer measurements

    NASA Technical Reports Server (NTRS)

    Mittal, M. C.

    1981-01-01

    The statistical properties of the radar derived rainfall obtained during the GARP Atlantic Tropical Experiment (GATE) are used to derive quantitative estimates of the spatial and temporal sampling errors associated with estimating rainfall from brightness temperature measurements such as would be obtained from a satellite-borne microwave radiometer employing a practical size antenna aperture. A basis for a method of correcting the so-called beam-filling problem, i.e., for the effect of nonuniformity of rainfall over the radiometer beamwidth, is provided. The method presented employs the statistical properties of the observations themselves without need for physical assumptions beyond those associated with the radiative transfer model. The simulation results presented offer a validation of the estimated accuracy that can be achieved, and the graphs included permit evaluation of the effect of the antenna resolution on both the temporal and spatial sampling errors.

  12. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  13. Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making

    PubMed Central

    Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.

    2016-01-01

    Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272

  14. Evolution of Altimetry Calibration and Future Challenges

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Haines, Bruce J.

    2012-01-01

    Over the past 20 years, altimetry calibration has evolved from an engineering-oriented exercise to a multidisciplinary endeavor driving the state of the art. This evolution has been spurred by the developing promise of altimetry to capture the large-scale, but small-amplitude, changes of the ocean surface containing the expression of climate change. The scope of altimeter calibration/validation programs has expanded commensurately. Early efforts focused on determining a constant range bias and verifying basic compliance of the data products with mission requirements. Contemporary investigations capture, with increasing accuracies, the spatial and temporal characteristics of errors in all elements of the measurement system. Dedicated calibration sites still provide the fundamental service of estimating absolute bias, but also enable long-term monitoring of the sea-surface height and constituent measurements. The use of a network of island and coastal tide gauges has provided the best perspective on the measurement stability, and revealed temporal variations of altimeter measurement system drift. The cross-calibration between successive missions provided fundamentally new information on the performance of altimetry systems. Spatially and temporally correlated errors pose challenges for future missions, underscoring the importance of cross-calibration of new measurements against the established record.

  15. Rotation Rate of Saturn's Magnetosphere using CAPS Plasma Measurements

    NASA Technical Reports Server (NTRS)

    Sittler, E.; Cooper, J.; Hartle, R.; Simpson, D.; Johnson, R.; Thomsen, M.; Arridge, C.

    2011-01-01

    We present the current status of an investigation of the rotation rate of Saturn's magnetosphere using a 3D velocity moment technique being developed at Goddard, which is similar to the 2D version used by Sittler et al. for SOI and to that used by Thomsen et al. This technique allows one to nearly cover the full energy range of the Cassini Plasma Spectrometer (CAPS) IMS, from 1 V ≤ E/Q < 50 kV. Since our technique maps the observations into a local inertial frame, it works during roll maneuvers. We make comparisons with the bi-Maxwellian fitting technique developed by Wilson et al. and with the similar velocity moment technique of Thomsen et al. We concentrate our analysis on periods when ion composition data are available, which are used to weight the non-compositional data, referred to as singles data, to separate H+, H2+ and water group ions (W+) from each other. The chosen periods have high enough telemetry rates (4 kbps or higher) that coincidence ion data, similar to those used by Sittler et al. for SOI, are available. The ion data set is especially valuable for measuring flow velocities for protons, which are more difficult to derive from singles data within the inner magnetosphere, where the signal is dominated by heavy ions (i.e., the proton peak merges with the W+ peak as a low-energy shoulder). Our technique uses a flux function, which is zero in the proper plasma flow frame, to estimate fluid parameter uncertainties. The comparisons investigate the experimental errors and potential for systematic errors in the analyses, including ours. The rolls provide the best data set for obtaining 4π coverage of the plasma but are more susceptible to time aliasing effects. In the future we will make comparisons with magnetic field observations, Saturn ionosphere conductivities as presently known, and the field-aligned currents necessary for the planet to enforce corotation of the rotating plasma.

  16. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. 
Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamic with random initial conditions.

  17. Gridded National Inventory of U.S. Methane Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  18. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; ...

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
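    The spatial disaggregation step can be sketched as allocating a national source-type total to grid cells in proportion to an activity proxy, keeping the gridded total consistent with the national one; the proxy values and units here are invented:

```python
# Proxy-based disaggregation of a national emission total onto grid cells.
def disaggregate(national_total, proxy_by_cell):
    proxy_sum = sum(proxy_by_cell.values())
    return {cell: national_total * p / proxy_sum
            for cell, p in proxy_by_cell.items()}

wells = {"cell_a": 120, "cell_b": 30, "cell_c": 50}   # proxy: e.g. well counts
gridded = disaggregate(1000.0, wells)                 # hypothetical Gg CH4/yr
assert abs(sum(gridded.values()) - 1000.0) < 1e-9     # mass is conserved
print(gridded)
```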

  19. Regional Brain Dysfunction Associated with Semantic Errors in Comprehension.

    PubMed

    Shahid, Hinna; Sebastian, Rajani; Tippett, Donna C; Saxena, Sadhvi; Wright, Amy; Hanayik, Taylor; Breining, Bonnie; Bonilha, Leonardo; Fridriksson, Julius; Rorden, Chris; Hillis, Argye E

    2018-02-01

    Here we illustrate how investigation of individuals acutely after stroke, before structure/function reorganization through recovery or rehabilitation, can be helpful in answering questions about the role of specific brain regions in language functions. Although there is converging evidence from a variety of sources that the left posterior-superior temporal gyrus plays some role in spoken word comprehension, its precise role in this function has not been established. We hypothesized that this region is essential for distinguishing between semantically related words, because it is critical for linking the spoken word to the complete semantic representation. We tested this hypothesis in 127 individuals within 48 hours of acute ischemic stroke, before the opportunity for reorganization or recovery. We identified tissue dysfunction (acute infarct and/or hypoperfusion) in gray and white matter parcels of the left hemisphere, and we evaluated the association between rate of semantic errors in a word-picture verification task and extent of tissue dysfunction in each region. We found that after correcting for lesion volume and multiple comparisons, the rate of semantic errors correlated with the extent of tissue dysfunction in left posterior-superior temporal gyrus and retrolenticular white matter.

  20. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  1. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
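
    The disaggregation step described above reduces, at its core, to proportional allocation of a national total onto grid cells using spatial proxy data. A minimal sketch of that idea; the function name and the proxy values are illustrative, not taken from the inventory itself:

```python
import numpy as np

def disaggregate(national_total, proxy):
    """Allocate a national emission total across grid cells in
    proportion to a spatial proxy (e.g., county-level activity data)."""
    weights = proxy / proxy.sum()
    return national_total * weights

# Hypothetical 2x2 grid with made-up proxy activity data.
grid = disaggregate(100.0, np.array([[1.0, 3.0], [4.0, 2.0]]))
```

    By construction, the gridded values sum back to the national total, which is the consistency property the gridded inventory maintains with the EPA GHGI totals.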

  2. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis

    PubMed Central

    Tabelow, Karsten; König, Reinhard; Polzehl, Jörg

    2016-01-01

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes, and, thus, makes it possible to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
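
    The moving-window estimator whose constancy assumption the study questions is easy to state concretely. A minimal sketch with a simulated logistic learning curve; all names and parameters are illustrative, not the authors' code:

```python
import numpy as np

def moving_window_rate(correct, w):
    """Conventional learning-curve estimate: proportion of correct
    responses in a sliding window of w trials. Implicitly assumes
    performance is constant within each window."""
    kernel = np.ones(w) / w
    return np.convolve(np.asarray(correct, dtype=float), kernel, mode="valid")

# Simulated learner whose success probability follows a logistic curve.
rng = np.random.default_rng(0)
trials = np.arange(200)
p_true = 1.0 / (1.0 + np.exp(-(trials - 100) / 20))
correct = rng.random(200) < p_true
rate = moving_window_rate(correct, w=20)
```

    When the true success probability changes quickly relative to the window length, the windowed estimate lags and smooths the transition, which is the kind of systematic error the authors quantify.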

  3. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
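
    The additive white-plus-power-law noise model discussed here can be simulated by passing white noise through a fractional-integration filter and adding an independent white component, which mirrors the idea of forming a filter that adds noise processes rather than combining them in quadrature. A sketch under those assumptions; the recursion is Hosking's fractional-differencing formula, and none of this is the est_noise source code:

```python
import numpy as np

def fractional_filter(n, alpha):
    """Impulse response h_k that turns unit white noise into 1/f^alpha
    power-law noise; d = alpha/2 is the fractional-integration order."""
    d = alpha / 2.0
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (d + k - 1.0) / k
    return h

def simulate_noise(n, alpha, sigma_pl, sigma_wh, rng):
    """White noise plus power-law noise, added (not in quadrature)."""
    pl = np.convolve(fractional_filter(n, alpha), rng.standard_normal(n))[:n]
    return sigma_wh * rng.standard_normal(n) + sigma_pl * pl
```

    For alpha = 2 the filter coefficients are all 1, i.e., a cumulative sum of white noise (a random walk), which makes a convenient sanity check.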

  4. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
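
    The convolution approach favored by the study evaluates the model curve as a discrete convolution of the arterial input function (AIF) with an impulse response. A minimal sketch using a single exponential impulse response as a simplified stand-in for the 2CXM closed form; names and parameter values are illustrative:

```python
import numpy as np

def tissue_curve(t, aif, ktrans, ve):
    """Tissue concentration as the AIF convolved with an exponential
    impulse response (simplified one-compartment stand-in for 2CXM)."""
    dt = t[1] - t[0]
    irf = ktrans * np.exp(-(ktrans / ve) * t)  # impulse response function
    return np.convolve(aif, irf)[: len(t)] * dt

# Sanity check: a discrete delta-function AIF returns the impulse
# response itself.
t = np.arange(0.0, 10.0, 0.1)
aif = np.zeros_like(t)
aif[0] = 1.0 / 0.1
c = tissue_curve(t, aif, ktrans=0.2, ve=0.3)
```

    Because the convolution avoids step-size-dependent integration error, its accuracy degrades more gracefully at coarse temporal sampling than a Runge-Kutta evaluation of the coupled differential equations, consistent with the study's findings.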

  5. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  6. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  7. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J.

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking performance. A future study will investigate spatial uniformity of motion and its effect on the motion estimation errors.
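
    The state-observer component of the ABST method is an alpha-beta filter, which can be sketched in a few lines. The gain values and the one-dimensional setting here are illustrative, not the paper's tuned parameters:

```python
def alpha_beta_filter(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Alpha-beta observer: predict position and velocity, then correct
    both with fixed gains applied to the innovation (measurement residual)."""
    x = measurements[0]  # position state
    v = 0.0              # velocity state
    filtered = []
    for z in measurements:
        x_pred = x + v * dt      # predict
        r = z - x_pred           # innovation
        x = x_pred + alpha * r   # correct position
        v = v + (beta / dt) * r  # correct velocity
        filtered.append(x)
    return filtered

track = alpha_beta_filter([5.0] * 20)
```

    In the ABST scheme the innovation would additionally be gated by a similarity threshold before being trusted, which is what suppresses spurious template-matching jumps.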

  8. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because orbital dynamics cause the baseline length and baseline orientation to evolve spatially and temporally, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations of height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.

  9. [Allocation of attentional resource and monitoring processes under rapid serial visual presentation].

    PubMed

    Nishiura, K

    1998-08-01

    With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on different kinds of letters -- romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, the post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resource to response-defining dimension was the cause of the errors. In addition, results concerning confidence rating showed that monitoring of temporal and contextual information was extremely accurate, but it was not so for stimulus information. These results suggest that attentional resource is different from monitoring resource.

  10. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  11. Predicting forest insect flight activity: A Bayesian network approach

    PubMed Central

    Pawson, Stephen M.; Marcot, Bruce G.; Woodberry, Owen G.

    2017-01-01

    Daily flight activity patterns of forest insects are influenced by temporal and meteorological conditions. Temperature and time of day are frequently cited as key drivers of activity; however, complex interactions between multiple contributing factors have also been proposed. Here, we report individual Bayesian network models to assess the probability of flight activity of three exotic insects, Hylurgus ligniperda, Hylastes ater, and Arhopalus ferus in a managed plantation forest context. Models were built from 7,144 individual hours of insect sampling, temperature, wind speed, relative humidity, photon flux density, and temporal data. Discretized meteorological and temporal variables were used to build naïve Bayes tree augmented networks. Calibration results suggested that the H. ater and A. ferus Bayesian network models had the best fit for low Type I and overall errors, and H. ligniperda had the best fit for low Type II errors. Maximum hourly temperature and time since sunrise had the largest influence on H. ligniperda flight activity predictions, whereas time of day and year had the greatest influence on H. ater and A. ferus activity. Type II model errors for the prediction of no flight activity is improved by increasing the model’s predictive threshold. Improvements in model performance can be made by further sampling, increasing the sensitivity of the flight intercept traps, and replicating sampling in other regions. Predicting insect flight informs an assessment of the potential phytosanitary risks of wood exports. Quantifying this risk allows mitigation treatments to be targeted to prevent the spread of invasive species via international trade pathways. PMID:28953904
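
    The discretized-variable networks described above reduce, in the naïve Bayes special case, to multiplying class-conditional likelihoods of each observed feature bin and normalizing. A minimal sketch; the classes, feature bins, and probabilities are all made up for illustration and are not the paper's fitted values:

```python
def naive_bayes_predict(priors, likelihoods, evidence):
    """Posterior over classes for a naive Bayes model with discretized
    features: P(c | e) proportional to P(c) * prod_f P(f | c)."""
    post = {}
    for c, prior in priors.items():
        p = prior
        for f in evidence:
            p *= likelihoods[c][f]
        post[c] = p
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# Hypothetical discretized observations: a warm hour near dusk.
post = naive_bayes_predict(
    priors={"flight": 0.3, "no_flight": 0.7},
    likelihoods={"flight": {"warm": 0.8, "dusk": 0.6},
                 "no_flight": {"warm": 0.2, "dusk": 0.3}},
    evidence=["warm", "dusk"],
)
```

    A tree-augmented network relaxes the independence assumption by letting each feature additionally condition on one other feature, which is the structure the study actually fits.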

  12. Errors on interrupter tasks presented during spatial and verbal working memory performance are linearly linked to large-scale functional network connectivity in high temporal resolution resting state fMRI.

    PubMed

    Magnuson, Matthew Evan; Thompson, Garth John; Schwarb, Hillary; Pan, Wen-Ju; McKinley, Andy; Schumacher, Eric H; Keilholz, Shella Dawn

    2015-12-01

    The brain is organized into networks composed of spatially separated anatomical regions exhibiting coherent functional activity over time. Two of these networks (the default mode network, DMN, and the task positive network, TPN) have been implicated in the performance of a number of cognitive tasks. To directly examine the stable relationship between network connectivity and behavioral performance, high temporal resolution functional magnetic resonance imaging (fMRI) data were collected during the resting state, and behavioral data were collected from 15 subjects on different days, exploring verbal working memory, spatial working memory, and fluid intelligence. Sustained attention performance was also evaluated in a task interleaved between resting state scans. Functional connectivity within and between the DMN and TPN was related to performance on these tasks. Decreased TPN resting state connectivity was found to significantly correlate with fewer errors on an interrupter task presented during a spatial working memory paradigm and decreased DMN/TPN anti-correlation was significantly correlated with fewer errors on an interrupter task presented during a verbal working memory paradigm. A trend for increased DMN resting state connectivity to correlate to measures of fluid intelligence was also observed. These results provide additional evidence of the relationship between resting state networks and behavioral performance, and show that such results can be observed with high temporal resolution fMRI. Because cognitive scores and functional connectivity were collected on nonconsecutive days, these results highlight the stability of functional connectivity/cognitive performance coupling.

  13. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually reached by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie will be mandatory at the sub-mm level, which is currently not achievable. To assess the effects of local ties on ITRF computations, the error sources must be investigated so that they can be realistically modeled and accounted for. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie error and technique-specific error contributions to uncertainties and thus assess the accuracy of space-geodetic techniques.
    Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  14. Visualization of 3D CT-based anatomical models

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

    Biomedical volumetric data visualization techniques for exploration purposes are well developed. Most of the known methods are inappropriate for surgery simulation systems due to lack of realism. Segmented data visualization is a well-known approach for visualizing structured volumetric data. This research focuses on improving the segmented data visualization technique by resolving aliasing problems and by using material-transparency modeling for better rendering of semitransparent structures.

  15. Spatial Computation

    DTIC Science & Technology

    2003-12-01


  16. Range safety signal propagation through the SRM exhaust plume of the space shuttle

    NASA Technical Reports Server (NTRS)

    Boynton, F. P.; Davies, A. R.; Rajasekhar, P. S.; Thompson, J. A.

    1977-01-01

    Theoretical predictions of plume interference for the space shuttle range safety system by solid rocket booster exhaust plumes are reported. The signal propagation was calculated using a split operator technique based upon the Fresnel-Kirchhoff integral, using fast Fourier transforms to evaluate the convolution and treating the plume as a series of absorbing and phase-changing screens. Talanov's lens transformation was applied to reduce aliasing problems caused by ray divergence.
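
    The split-operator scheme described here alternates free-space Fresnel propagation, evaluated in the Fourier domain, with thin absorbing/phase screens. A one-dimensional sketch of the free-space step; the grid, wavelength, and distance values are illustrative, and this is not the authors' code:

```python
import numpy as np

def fresnel_step(field, dx, wavelength, dz):
    """One free-space step of a split-operator propagator: FFT,
    multiply by the Fresnel transfer function, inverse FFT. Plume
    screens would be applied between successive steps."""
    fx = np.fft.fftfreq(field.size, d=dx)
    H = np.exp(-1j * np.pi * wavelength * dz * fx**2)  # Fresnel kernel
    return np.fft.ifft(np.fft.fft(field) * H)

# Illustrative Gaussian beam on a 1-D grid.
x = np.linspace(-0.5, 0.5, 256)
beam = np.exp(-(x / 0.1) ** 2).astype(complex)
out = fresnel_step(beam, dx=x[1] - x[0], wavelength=0.7, dz=10.0)
```

    Because the transfer function has unit modulus, each step conserves energy exactly, a useful numerical check; the FFT's implicit periodicity is also what makes ray divergence alias without a remapping such as Talanov's lens transformation.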

  17. Morphological demosaicking

    NASA Astrophysics Data System (ADS)

    Quan, Shuxue

    2009-02-01

    Bayer patterns, in which a single value of red, green or blue is available for each pixel, are widely used in digital color cameras. The reconstruction of the full color image is often referred to as demosaicking. This paper introduces a new approach: morphological demosaicking. The approach is based on strong edge directionality selection and interpolation, followed by morphological operations to refine edge directionality selection and reduce color aliasing. Finally, performance evaluation and examples of color artifact reduction are shown.
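
    For contrast with the edge-directed method in the abstract, the non-directional baseline (bilinear interpolation of the green channel) is easy to sketch; it is exactly the kind of interpolation that produces the color aliasing the morphological refinement targets. The function assumes an RGGB layout and is illustrative, not the paper's algorithm:

```python
import numpy as np

def bilinear_green(bayer):
    """Bilinear green-channel interpolation for an RGGB Bayer mosaic:
    keep measured G samples, average the 4-neighbour G values elsewhere."""
    h, w = bayer.shape
    green = np.zeros((h, w))
    mask = np.zeros((h, w), dtype=bool)
    mask[0::2, 1::2] = True  # G sites on even rows (R G R G ...)
    mask[1::2, 0::2] = True  # G sites on odd rows  (G B G B ...)
    green[mask] = bayer[mask]
    # Average available green neighbours at R/B sites (zero-padded edges).
    padded = np.pad(green, 1)
    pmask = np.pad(mask, 1).astype(float)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:])
    count = (pmask[:-2, 1:-1] + pmask[2:, 1:-1]
             + pmask[1:-1, :-2] + pmask[1:-1, 2:])
    green[~mask] = (neigh / np.maximum(count, 1.0))[~mask]
    return green

g = bilinear_green(np.full((6, 6), 3.0))
```

    Averaging across an edge rather than along it is what creates zipper and false-color artifacts; the morphological approach instead selects an interpolation direction first and then cleans up the residual aliasing.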

  18. Ocean Surface Wave Optical Roughness: Analysis of Innovative Measurements

    DTIC Science & Technology

    2013-12-16

    relationship of MSS to wind speed, and at times has shown a reversal of the Cox-Munk linear relationship. Furthermore, we observe measurable changes in...1985]. The variable speed allocation method has the effect of aliasing (cb) to slower waves, thereby increasing the exponent –m. Our analysis based ...RaDyO) program. The primary research goals of the program are to (1) examine time -dependent oceanic radiance distribution in relation to dynamic

  19. Perceptual Performance Impact of GPU-Based WARP and Anti-Aliasing for Image Generators

    DTIC Science & Technology

    2016-06-29

    In 2012 the U.S. Air Force School of Aerospace Medicine, in partnership with the Air Force Research Laboratory (AFRL) and NASA AMES, constructed the Operational Based Vision Assessment (OBVA) simulator to evaluate the... This 15-channel, 150...

  20. Sampling and position effects in the Electronically Steered Thinned Array Radiometer (ESTAR)

    NASA Technical Reports Server (NTRS)

    Katzberg, Stephen J.

    1993-01-01

    A simple engineering level model of the Electronically Steered Thinned Array Radiometer (ESTAR) is developed that allows an identification of the major effects of the sampling process involved with this technique. It is shown that the ESTAR approach is sensitive to aliasing and has a highly non-uniform sensitivity profile. It is further shown that the ESTAR approach is strongly sensitive to position displacements of the low-density sampling antenna elements.
