Performance of cellular frequency-hopped spread-spectrum radio networks
NASA Astrophysics Data System (ADS)
Gluck, Jeffrey W.; Geraniotis, Evaggelos
1989-10-01
Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.
Automatic Locking of Laser Frequency to an Absorption Peak
NASA Technical Reports Server (NTRS)
Koch, Grady J.
2006-01-01
An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. The system also has considerable commercial potential for precise control of laser frequencies in future dense wavelength-division-multiplexing optical communications systems. Its operation is completely automatic: unlike some prior laser-frequency-locking systems, it ordinarily requires no human operator to manually set the frequency to an initial value close enough to the peak for automatic locking to take over; the system performs the initial adjustment itself. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that, in addition to its zero crossing at the top of the absorption peak, the error signal closely approximates a straight line in the vicinity of the zero crossing (see Figure 2).
This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that constantly adjusts the frequency in an effort to drive the error to zero. When the laser frequency deviates from the midpeak value but remains within the locking range, the magnitude and sign of the error signal indicate the amount of detuning and the control circuitry adjusts the frequency by what it estimates to be the negative of this amount in an effort to bring the error to zero.
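The locking scheme described above can be sketched numerically. The following is a minimal simulation, not the flight system: the Lorentzian line shape, the finite-difference derivative standing in for the dither-and-demodulate error signal, and the loop gains are all hypothetical, and only the proportional and integral terms of the full PID scheme are kept for brevity.

```python
import numpy as np

def absorption(f, f0=0.0, w=1.0):
    # Hypothetical Lorentzian absorption peak centered at f0
    return 1.0 / (1.0 + ((f - f0) / w) ** 2)

def error_signal(f, df=1e-3):
    # Derivative of absorption with respect to frequency, by finite difference
    # (the hardware obtains it by modulating the frequency and demodulating)
    return (absorption(f + df) - absorption(f - df)) / (2 * df)

def lock(f_start, kp=0.5, ki=0.05, steps=200):
    # Proportional + integral loop driving the derivative (error) to zero;
    # within the locking range the error is nearly linear in the detuning
    f, integ = f_start, 0.0
    for _ in range(steps):
        e = error_signal(f)
        integ += e
        f += kp * e + ki * integ
    return f

print(abs(lock(0.4)) < 1e-6)  # True: locked to the peak at f0 = 0.0
```

Starting anywhere inside the locking range, the loop settles at the zero crossing of the error signal, i.e. the top of the peak.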
Analysis on optical heterodyne frequency error of full-field heterodyne interferometer
NASA Astrophysics Data System (ADS)
Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli
2017-06-01
Full-field heterodyne interferometric measurement is becoming more practical now that low-frequency heterodyne acousto-optic modulators can replace complex electro-mechanical scanning devices. Because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer, the optical element surface can be acquired directly by synchronously detecting the phase of the signal received at each pixel. Instead of the traditional four-step phase-shifting calculation, Fourier spectral analysis is used for phase extraction, which is less sensitive to sources of uncertainty and yields higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their respective advantages and disadvantages specified. A heterodyne interferometer must combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two unavoidable kinds of heterodyne frequency error. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and these errors is calculated. The tolerances on the extinction ratio of the polarization splitting prism and on the signal-to-noise ratio of stray light are given. The phase extraction error caused by beat frequency shifting in the Fourier analysis is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window corrects the phase extracted from the heterodyne signal. Simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
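The Fourier-based per-pixel phase extraction at the heart of this approach can be illustrated with a short sketch. The sampling rate, beat frequency, and phase below are hypothetical; when the beat frequency falls exactly on an FFT bin the phase is recovered essentially exactly, and the beat-frequency-shift error the paper analyzes arises when it does not.

```python
import numpy as np

def extract_phase(signal):
    # FFT-based phase extraction of a heterodyne beat signal; the Hanning
    # window suppresses spectral leakage from neighboring components
    n = signal.size
    spec = np.fft.rfft(signal * np.hanning(n))
    k = np.argmax(np.abs(spec[1:])) + 1   # dominant (beat) bin, skipping DC
    return np.angle(spec[k])

fs, n, f_beat, phi = 1000.0, 1000, 50.0, 0.7   # hypothetical values
t = np.arange(n) / fs
s = np.cos(2 * np.pi * f_beat * t + phi)       # one pixel's beat signal
print(round(extract_phase(s), 3))  # 0.7: the beat lies on an FFT bin
```

Shifting `f_beat` off an integer multiple of `fs / n` reproduces the phase error that the paper's amplitude-ratio spectrum correction is designed to remove.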
NASA Astrophysics Data System (ADS)
Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.
2017-12-01
Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.
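The frequency-independence the authors defend can be demonstrated with synthetic data: a two-frequency surface forcing is propagated downward with the conduction-only (zero-flux) solution, and the amplitude ratio between two depths at either frequency alone recovers the same thermal diffusivity. All parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical parameters: thermal diffusivity kappa and two sensor depths
kappa, z1, z2 = 0.05, 0.0, 0.2           # m^2/day, m
t = np.arange(0, 30, 1 / 48)             # 30 days, half-hourly samples

def temp(z, freqs, amps):
    # Conduction-only solution: each frequency decays and lags independently
    T = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        w = 2 * np.pi * f
        d = np.sqrt(w / (2 * kappa))     # inverse damping depth
        T += a * np.exp(-d * z) * np.sin(w * t - d * z)
    return T

freqs, amps = [1.0, 2.0], [5.0, 2.0]     # diurnal + semidiurnal (cycles/day)
T1, T2 = temp(z1, freqs, amps), temp(z2, freqs, amps)

def kappa_from(f):
    # Amplitude ratio at one frequency -> diffusivity (no-flow case)
    spec1, spec2 = np.fft.rfft(T1), np.fft.rfft(T2)
    k = int(round(f * 30))               # FFT bin of f for the 30-day record
    ar = np.abs(spec2[k]) / np.abs(spec1[k])
    w = 2 * np.pi * f
    return w * (z2 - z1) ** 2 / (2 * np.log(ar) ** 2)

print(round(kappa_from(1.0), 4), round(kappa_from(2.0), 4))  # 0.05 0.05
```

Either frequency, used alone, returns the diffusivity that generated the multi-frequency series, consistent with the paper's argument that multi-frequency forcing violates no assumption of the single-frequency solution.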
Limitations of Dower's inverse transform for the study of atrial loops during atrial fibrillation.
Guillem, María S; Climent, Andreu M; Bollmann, Andreas; Husser, Daniela; Millet, José; Castells, Francisco
2009-08-01
Spatial characteristics of atrial fibrillatory waves have been extracted by using the vectorcardiogram (VCG) during atrial fibrillation (AF). However, the VCG is usually not recorded in clinical practice, and atrial loops are instead derived from the 12-lead electrocardiogram (ECG). We evaluated the suitability of reconstructing orthogonal leads from the 12-lead ECG for fibrillatory waves in AF. We used the Physikalisch-Technische Bundesanstalt diagnostic ECG database, which contains 15 simultaneously recorded signals (12-lead ECG and three Frank orthogonal leads) for 13 patients during AF. Frank leads were derived from the 12-lead ECG by using Dower's inverse transform. Derived leads were then compared to true Frank leads in terms of the relative error achieved. We calculated the orientation of AF loops for both recorded orthogonal leads and derived leads and measured the difference in estimated orientation. We also investigated the relationship of derivation errors with fibrillatory wave amplitude, frequency, wave residuum, and the fit of the AF loops to a plane. Errors in the derivation of AF loops were 68 +/- 31%, and errors in the estimation of orientation were 35.85 +/- 20.43 degrees. We did not find any correlation between these errors and amplitude, frequency, or the other parameters. In conclusion, Dower's inverse transform should not be used to derive orthogonal leads from the 12-lead ECG for the analysis of fibrillatory wave loops in AF. Spatial parameters obtained after this derivation may differ from those obtained from recorded orthogonal leads.
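A relative-error metric of the kind reported above can be sketched as the RMS of the residual expressed as a percentage of the RMS of the true Frank lead. This is one plausible form of the paper's metric, assumed here; the signals are synthetic stand-ins, not ECG data.

```python
import numpy as np

def relative_error(derived, true):
    # Relative error of a derived lead, as a percentage of the RMS
    # of the true (recorded) lead
    return 100 * np.sqrt(np.mean((derived - true) ** 2) / np.mean(true ** 2))

rng = np.random.default_rng(0)
true = np.sin(np.linspace(0, 20 * np.pi, 2000))        # stand-in for lead X
derived = true + 0.5 * rng.standard_normal(true.size)  # noisy reconstruction
print(round(relative_error(derived, true), 1))  # roughly 70% at this noise level
```

Under this definition, the 68 +/- 31% figure above means the residual of the derived fibrillatory loops is of the same order as the true signal itself.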
NASA Astrophysics Data System (ADS)
Xiao, Zhili; Tan, Chao; Dong, Feng
2017-08-01
Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.
A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.
Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel
NASA Technical Reports Server (NTRS)
Liu, Chia-Liang; Feher, Kamilo
1991-01-01
The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.
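The premise that the information in M-ary DPSK lies entirely in the phase difference between consecutive symbols can be illustrated with a noise-free pi/4-DQPSK modulator and differential detector. The bit-pair-to-phase-increment mapping shown is one common Gray mapping, assumed here for illustration.

```python
import numpy as np

# pi/4-DQPSK: each bit pair maps to a phase increment of +/-pi/4 or +/-3pi/4,
# so the information is carried entirely by the symbol-to-symbol phase change
INC = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
       (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}

def modulate(bits):
    phase, out = 0.0, []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        phase += INC[(b1, b2)]
        out.append(np.exp(1j * phase))
    return np.array(out)

def demodulate(symbols):
    # Differential detection: decide on the phase difference of consecutive
    # symbols, so no absolute carrier phase reference is needed
    diffs = np.angle(np.concatenate(
        ([symbols[0]], symbols[1:] * np.conj(symbols[:-1]))))
    bits = []
    for d in diffs:
        pair = min(INC, key=lambda p: abs(np.angle(np.exp(1j * (d - INC[p])))))
        bits.extend(pair)
    return bits

tx = [0, 1, 1, 1, 0, 0, 1, 0]
print(demodulate(modulate(tx)) == tx)  # True: bits recovered from phase diffs
```

Adding AWGN, Rayleigh-faded paths, and a co-channel interferer to `symbols` before detection is the setting in which the paper's closed-form BER applies.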
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of a smartphone or with a photoplethysmograph. An optimization of the signal preprocessing stage was also carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and for the photoplethysmograph, and examine whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
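The two best-performing fiducial points above, the peak of the first derivative and the minimum of the second derivative, can be located with a few lines of code. The pulse below is a Gaussian stand-in for the systolic upstroke, not real photoplethysmographic data.

```python
import numpy as np

def fiducial_points(pulse, fs):
    # Candidate fiducial points for pulse arrival: the peak of the first
    # derivative and the minimum of the second derivative of the pulse wave
    d1 = np.gradient(pulse) * fs
    d2 = np.gradient(d1) * fs
    return int(np.argmax(d1)), int(np.argmin(d2))

fs = 100.0                                # hypothetical sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
pulse = np.exp(-((t - 0.3) / 0.05) ** 2)  # Gaussian stand-in for the upstroke
i_d1, i_d2 = fiducial_points(pulse, fs)
print(i_d1, i_d2)  # d1 peak on the rising edge, d2 minimum near the summit
```

In practice the signal would first be band-pass filtered with the cutoff frequencies reported above before the derivatives are taken.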
Triple-frequency radar retrievals of snowfall properties from the OLYMPEX field campaign
NASA Astrophysics Data System (ADS)
Leinonen, J. S.; Lebsock, M. D.; Sy, O. O.; Tanelli, S.
2017-12-01
Retrieval of snowfall properties with radar is subject to significant errors arising from the uncertainties in the size and structure of snowflakes. Recent modeling and theoretical studies have shown that multi-frequency radars can potentially constrain the microphysical properties and thus reduce the uncertainties in the retrieved snow water content. So far, there have only been limited efforts to leverage the theoretical advances in actual snowfall retrievals. In this study, we have implemented an algorithm that retrieves the snowfall properties from triple-frequency radar data using the radar scattering properties from a combination of snowflake scattering databases, which were derived using numerical scattering methods. Snowflake number concentration, characteristic size and density are derived using a combination of optimal estimation and Kalman smoothing; the snow water content and other bulk properties are then derived from these. The retrieval framework is probabilistic and thus naturally provides error estimates for the retrieved quantities. We tested the retrieval algorithm using data from the APR3 airborne radar flown onboard the NASA DC-8 aircraft during the Olympic Mountain Experiment (OLYMPEX) in late 2015. We demonstrated consistent retrieval of snow properties and smooth transition from single- and dual-frequency retrievals to using all three frequencies simultaneously. The error analysis shows that the retrieval accuracy is improved when additional frequencies are introduced. We also compare the findings to in situ measurements of snow properties as well as measurements by polarimetric ground-based radar.
Modeling the directivity of parametric loudspeaker
NASA Astrophysics Data System (ADS)
Shi, Chuang; Gan, Woon-Seng
2012-09-01
Emerging applications of the parametric loudspeaker, such as 3D audio, demand accurate directivity control at the audible frequency (i.e., the difference frequency). Though delay-and-sum beamforming has proven adequate for adjusting the steering angle of the parametric loudspeaker, accurate prediction of the mainlobe and sidelobes remains a challenging problem. This is mainly because of the approximations used to derive the directivity of the difference frequency from the directivity of the primary frequency, and the mismatches between theoretical and measured directivity caused by system errors incurred at different stages of the implementation. In this paper, we propose a directivity model of the parametric loudspeaker. The model consists of two tuning vectors corresponding to the spacing error and the weight error for the primary frequency, and adopts a modified form of the product directivity principle for the difference frequency to further improve modeling accuracy.
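The delay-and-sum primary beam and the product directivity principle for the difference frequency can be sketched as follows. The array geometry and primary frequencies are hypothetical, and the paper's spacing-error and weight-error tuning vectors would enter as perturbations to `spacing` and `weights`.

```python
import numpy as np

def directivity(freq, theta, spacing, weights, steer=0.0, c=343.0):
    # Delay-and-sum array factor of one ultrasonic (primary) beam; spacing
    # and weights may include error terms to model implementation mismatch
    pos = np.arange(weights.size) * spacing
    k = 2 * np.pi * freq / c
    phase = k * pos[:, None] * (np.sin(theta) - np.sin(steer))
    return np.abs((weights[:, None] * np.exp(1j * phase)).sum(0)) / weights.sum()

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
f1, f2 = 40_000.0, 41_000.0   # primary frequencies (hypothetical), 1 kHz apart
w = np.ones(8)                # nominal uniform weights
d = 0.01                      # nominal 10 mm element spacing

# Product directivity principle: the difference-frequency beam is
# approximated by the product of the two primary-frequency directivities
D_diff = directivity(f1, theta, d, w) * directivity(f2, theta, d, w)
print(round(D_diff[360], 3))  # 1.0: unity on boresight
```

The paper's modified product directivity would replace the plain product in the last step; the error vectors make the predicted sidelobes match measurement.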
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, B.N.
1955-05-12
Charts of the geographical distribution of the annual and seasonal D-values and their standard deviations at altitudes of 4500, 6000, and 7000 feet over Eurasia are derived and used to estimate the frequency of baro system errors.
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.
2003-01-01
Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal, and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. Sampling coarser than every 4 h results in significant error, and correlations between the true and sampled time series also decrease more rapidly once sampling becomes coarser than every 4 h.
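The spread-across-starting-points uncertainty measure generalizes to any periodic signal: subsample a high-resolution "truth" series at a given interval and compare the means obtained from every possible starting point. The toy series below (a diurnal cycle plus noise) is a stand-in for the Nature Run fields; note how 24-h sampling aliases the diurnal cycle.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 365)                      # one year of hourly "truth"
truth = 240 + 5 * np.sin(2 * np.pi * hours / 24) + rng.standard_normal(hours.size)

def mean_spread(step):
    # Spread of the annual mean across all possible sampling start points
    means = [truth[start::step].mean() for start in range(step)]
    return max(means) - min(means)

for step in (1, 4, 12, 24):
    print(step, round(mean_spread(step), 2))
```

Sampling every 4 h still covers the diurnal cycle evenly, so the spread stays near the noise floor; sampling every 24 h always sees the same phase of the cycle, so the annual mean depends strongly on the start hour.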
Noncommuting observables in quantum detection and estimation theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
Basing decisions and estimates on simultaneous approximate measurements of noncommuting observables in a quantum receiver is shown to be equivalent to measuring commuting projection operators on a larger Hilbert space than that of the receiver itself. The quantum-mechanical Cramer-Rao inequalities derived from right logarithmic derivatives and symmetrized logarithmic derivatives of the density operator are compared, and it is shown that the latter give superior lower bounds on the error variances of individual unbiased estimates of arrival time and carrier frequency of a coherent signal. For a suitably weighted sum of the error variances of simultaneous estimates of these, the former yield the superior lower bound under some conditions.
Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
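The least-squares scale factor that minimizes the root-mean-square error of scaled calculated frequencies against reference values has a closed form, lambda = sum(calc * ref) / sum(calc^2). A minimal sketch with toy data (the frequencies below are invented, not from the paper's databases):

```python
import numpy as np

def optimal_scale_factor(calc, ref):
    # Least-squares scale factor minimizing the RMSE of scaled calculated
    # frequencies against reference values: lambda = sum(c*r) / sum(c^2)
    calc, ref = np.asarray(calc), np.asarray(ref)
    return float(calc @ ref / (calc @ calc))

# Toy data: harmonic frequencies (cm^-1) overestimating the reference by ~4%
calc = np.array([3100.0, 1650.0, 1200.0, 750.0])
ref = 0.96 * calc
print(round(optimal_scale_factor(calc, ref), 3))  # 0.96 for this toy data
```

The paper's universal scale-factor ratios then convert a factor optimized for, say, zero-point energies into ones for harmonic and fundamental frequencies without refitting.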
ERIC Educational Resources Information Center
Jones, Gary; Tamburelli, Marco; Watson, Sarah E.; Gobet, Fernand; Pine, Julian M.
2010-01-01
Purpose: Deficits in phonological working memory and deficits in phonological processing have both been considered potential explanatory factors in specific language impairment (SLI). Manipulations of the lexicality and phonotactic frequency of nonwords enable contrasting predictions to be derived from these hypotheses. Method: Eighteen typically…
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
Performance prediction of a synchronization link for distributed aerospace wireless systems.
Wang, Wen-Qin; Shao, Huaizong
2013-01-01
For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, the transmitter and receiver are placed on separate platforms that use independent master oscillators, so there is no cancellation of low-frequency phase noise as in the monostatic case. Highly accurate time and frequency synchronization techniques are therefore required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including the oscillator, the phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.
NASA Astrophysics Data System (ADS)
Luce, Charles H.; Tonina, Daniele; Applebee, Ralph; DeWeese, Timothy
2017-11-01
Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes and thermal conductivity from temperature time series in streambeds are that the solution assumes that (1) the surface boundary condition is a sine wave or nearly so, and (2) there is no gradient in mean temperature with depth. Although the mathematical posing of the problem in the original solution might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we develop a mathematical proof demonstrating the equivalence of the solution developed from an arbitrary (Fourier integral) surface temperature forcing, evaluated at a single given frequency, with that derived considering a single frequency from the beginning. The implication is that any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes or gradients in the mean temperature with depth do not actually violate the assumptions, and deviations from them should not cause errors in estimates. Given this clarification, we further explore the potential for using information at multiple frequencies to augment the information derived from time series of temperature.
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
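The starting point of the procedure, the Allan variance of an oscillator's fractional-frequency data, can be computed directly from its standard definition, 0.5 * <(ybar_{k+1} - ybar_k)^2> over adjacent averages. The white-frequency-noise data below is synthetic, chosen because its Allan variance falls off as 1/tau.

```python
import numpy as np

def allan_variance(y, m=1):
    # Non-overlapping Allan variance of fractional-frequency data y,
    # averaged over adjacent groups of m samples (tau = m * tau0)
    ybar = y[: y.size - y.size % m].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(2)
y = rng.standard_normal(100_000)       # white frequency noise, unit variance
for m in (1, 10, 100):
    print(m, allan_variance(y, m))     # falls off as ~1/m for white FM noise
```

Fitting the resulting Allan-variance curve with a few power-law terms, and matching each to a first-order Markov process, reproduces the paper's five-process approximation of the power spectral density.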
Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B
2014-09-29
The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). 
In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable.
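The linear calibration step can be sketched as a least-squares fit of a gain and offset mapping the WBB signal onto the force-plate signal. The signal parameters below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def fit_linear_calibration(wbb, afp):
    # Least-squares gain and offset mapping WBB CoP onto force-plate CoP
    A = np.column_stack([wbb, np.ones_like(wbb)])
    gain, offset = np.linalg.lstsq(A, afp, rcond=None)[0]
    return gain, offset

rng = np.random.default_rng(3)
afp = 30 * np.sin(np.linspace(0, 8 * np.pi, 4000))            # "true" CoP, mm
wbb = 1.08 * afp + 2.5 + 0.3 * rng.standard_normal(afp.size)  # biased, noisy WBB

gain, offset = fit_linear_calibration(wbb, afp)
corrected = gain * wbb + offset

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(round(rmse(wbb, afp), 2), round(rmse(corrected, afp), 2))
```

As in the study, the calibration removes the systematic gain and offset errors, leaving only the residual noise in the corrected CoP signal.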
Leach, Julia M.; Mancini, Martina; Peterka, Robert J.; Hayes, Tamara L.; Horak, Fay B.
2014-01-01
The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the “gold standard” laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2–6 mm (before calibration) to 0.5–2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from −10.5% (before calibration) to −0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). 
In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable. PMID:25268919
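The linear calibration described above can be sketched as an ordinary least-squares fit of the WBB CoP signal to a reference (force-plate) signal. The gain/offset model and the synthetic sway signal below are illustrative assumptions, not the study's actual data or code:

```python
import numpy as np

def fit_linear_calibration(cop_wbb, cop_ref):
    """Least-squares fit of cop_ref ~ gain * cop_wbb + offset (one CoP axis)."""
    A = np.column_stack([cop_wbb, np.ones_like(cop_wbb)])
    (gain, offset), *_ = np.linalg.lstsq(A, cop_ref, rcond=None)
    return gain, offset

# Hypothetical sway: the WBB reads the true CoP with a scale and offset error
t = np.linspace(0.0, 10.0, 1000)
cop_true = 20.0 * np.sin(2 * np.pi * 0.4 * t)   # mm, <1 Hz sway as in the study
cop_wbb = 1.08 * cop_true + 3.0                 # distorted WBB reading (assumed)

gain, offset = fit_linear_calibration(cop_wbb, cop_true)
cop_cal = gain * cop_wbb + offset               # calibrated WBB signal
rmse_before = np.sqrt(np.mean((cop_wbb - cop_true) ** 2))
rmse_after = np.sqrt(np.mean((cop_cal - cop_true) ** 2))
```

With real data, the fit would be done per axis (ML and AP) against simultaneously recorded force-plate CoP, as in the study.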
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
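For a symmetric generalized eigenproblem Kφ = λMφ with an M-normalized eigenvector φ, the standard eigenvalue sensitivity is dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ. The sketch below verifies this identity against a finite difference on random symmetric positive-definite matrices; it illustrates the kind of derivative being computed, not the paper's reduction or iterative refinement techniques:

```python
import numpy as np

def eigenvalue_derivative(dK, dM, lam, phi):
    """d(lambda)/dp = phi^T (dK/dp - lambda dM/dp) phi, with phi M-normalized."""
    return phi @ (dK - lam * dM) @ phi

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)   # stiffness (SPD)
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)   # mass (SPD)
dK, dM = np.eye(n), np.zeros((n, n))    # assumed parameter: stiffness diagonal

# Generalized eigenproblem K phi = lam M phi via Cholesky reduction
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
w, V = np.linalg.eigh(Linv @ K @ Linv.T)
lam, phi = w[0], Linv.T @ V[:, 0]       # phi^T M phi = 1 by construction

analytic = eigenvalue_derivative(dK, dM, lam, phi)

eps = 1e-6                              # finite-difference check
w2 = np.linalg.eigh(Linv @ (K + eps * dK) @ Linv.T)[0]
fd = (w2[0] - w[0]) / eps
```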
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization with fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, the stopband, and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
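The frequency-domain objective (a sum of squared errors in the passband, the stopband, and at the quadrature frequency) can be sketched for a linear-phase prototype filter as follows. The band edges and filter coefficients are hypothetical, and this is only the cost function, not the hybrid optimizer:

```python
import numpy as np

def qmf_objective(h, wp=0.4 * np.pi, ws=0.6 * np.pi, ngrid=512):
    """Sum of mean-squared errors in the passband, the stopband, and at the
    quadrature frequency pi/2, where |H| should equal 1/sqrt(2)."""
    w = np.linspace(0.0, np.pi, ngrid)
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)
    phi_p = np.mean((H[w <= wp] - 1.0) ** 2)              # passband error
    phi_s = np.mean(H[w >= ws] ** 2)                      # stopband error
    Hq = np.abs(np.exp(-1j * (np.pi / 2) * np.arange(len(h))) @ h)
    phi_t = (Hq - 1.0 / np.sqrt(2.0)) ** 2                # quadrature error
    return phi_p + phi_s + phi_t

# Hypothetical prototypes: a half-band-like lowpass vs. a crude boxcar
h_good = np.array([0.0, 0.25, 0.5, 0.25, 0.0])
h_bad = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
```

Any of the optimizers named in the abstract could then be used to minimize `qmf_objective` over the filter coefficients.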
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1991-01-01
A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimate of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.
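The coarse-then-fine structure can be illustrated with a much simpler pair of stages: an FFT-peak search (bin-level accuracy, small probability of gross error) followed by a least-squares fit to the residual phase slope. This is a simplified stand-in for the patented DLS + EKF stages, not the claimed method, and all signal parameters below are hypothetical:

```python
import numpy as np

def coarse_then_fine_freq(x, fs):
    """Stage 1: FFT-peak search (coarse, bin-level accuracy).
    Stage 2: least-squares fit to the residual phase slope (fine)."""
    n = len(x)
    k = np.argmax(np.abs(np.fft.fft(x))[: n // 2])
    f_coarse = k * fs / n
    t = np.arange(n) / fs
    phase = np.unwrap(np.angle(x * np.exp(-2j * np.pi * f_coarse * t)))
    slope = np.polyfit(t, phase, 1)[0]
    return f_coarse + slope / (2.0 * np.pi)

rng = np.random.default_rng(1)
fs, n, f_true = 1000.0, 1024, 123.4     # assumed sampling rate and Doppler
t = np.arange(n) / fs
x = (np.exp(2j * np.pi * f_true * t)
     + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
f_hat = coarse_then_fine_freq(x, fs)
```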
Measurement time and statistics for a noise thermometer with a synthetic-noise reference
NASA Astrophysics Data System (ADS)
White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.
2008-08-01
This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.
Eliminating time dispersion from seismic wave modeling
NASA Astrophysics Data System (ADS)
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
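The forward ("add dispersion") transform can be sketched by mapping each Fourier component at true angular frequency w to the numerical frequency W = (2/dt)·arcsin(w·dt/2) implied by the second-order time FD operator; since arcsin x ≥ x, the signal is sped up, consistent with the abstract. The direct O(n²) evaluation below is a sketch of the idea, not the authors' implementation:

```python
import numpy as np

def apply_time_dispersion(x, dt):
    """Shift each Fourier component from its true angular frequency w to the
    numerical frequency W = (2/dt)*arcsin(w*dt/2) of the second-order time FD
    operator. Direct O(n^2) evaluation, for clarity rather than speed."""
    n = len(x)
    X = np.fft.fft(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    W = (2.0 / dt) * np.arcsin(np.clip(w * dt / 2.0, -1.0, 1.0))
    t = np.arange(n) * dt
    return (X[None, :] * np.exp(1j * np.outer(t, W))).sum(axis=1) / n

# An on-grid complex exponential is mapped exactly to its dispersed frequency
dt, n, k0 = 0.002, 256, 20
f0 = k0 / (n * dt)                          # ~39.06 Hz, on the FFT grid
t = np.arange(n) * dt
y = apply_time_dispersion(np.exp(2j * np.pi * f0 * t), dt)
W0 = (2.0 / dt) * np.arcsin(np.pi * f0 * dt)   # predicted numerical frequency
```

The removal transform would apply the inverse mapping, resampling the dispersed spectrum back onto the true frequency axis.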
Low-cost FM oscillator for capacitance type of blade tip clearance measurement system
NASA Technical Reports Server (NTRS)
Barranger, John P.
1987-01-01
The frequency-modulated (FM) oscillator described is part of a blade tip clearance measurement system that meets the needs of a wide class of fans, compressors, and turbines. As a result of advancements in the technology of ultra-high-frequency operational amplifiers, the FM oscillator requires only a single low-cost integrated circuit. Its carrier frequency is 42.8 MHz when it is used with an integrated probe and connecting cable assembly consisting of a 0.81 cm diameter engine-mounted capacitance probe and a 61 cm long hermetically sealed coaxial cable. A complete circuit analysis is given, including amplifier negative resistance characteristics. An error analysis of environmentally induced effects is also derived, and an error-correcting technique is proposed. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a rotor blade thickness of 1.2 mm. High-temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum permissible seal temperature. The major source of error is the residual FM oscillator noise, which produces a clearance error of ±10 μm at a clearance of 0.5 mm. The oscillator electronics accommodates the high rotor speeds associated with small engines, the signals from which may have frequency components as high as 1 MHz.
Performance Prediction of a Synchronization Link for Distributed Aerospace Wireless Systems
Shao, Huaizong
2013-01-01
For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including the oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link. PMID:23970828
CORRELATED ERRORS IN EARTH POINTING MISSIONS
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. 
Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.
Zhang, Xi; Miao, Lingjuan; Shao, Haijun
2016-01-01
If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper. PMID:27144570
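The role of a KF in carrier tracking can be illustrated with a minimal two-state (phase, frequency) filter driven by noisy, already-unwrapped phase measurements. This sketch ignores code tracking, the dual-filter structure, and INS aiding entirely; all numbers are hypothetical:

```python
import numpy as np

def track_phase_freq(phase_meas, dt, q=1e-4, r=0.01):
    """Two-state (phase [rad], frequency [rad/s]) Kalman filter on noisy,
    unwrapped phase measurements. An illustrative stand-in for KF-based
    carrier tracking, not the paper's preprocessing scheme."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x, P = np.zeros(2), 10.0 * np.eye(2)
    for z in phase_meas:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                             # Kalman gain, shape (2, 1)
        x = x + (K * (z - H @ x)).ravel()             # update
        P = (np.eye(2) - K @ H) @ P
    return x

rng = np.random.default_rng(2)
dt, n, f_true = 0.001, 2000, 12.0        # rad/s carrier offset (assumed)
t = np.arange(n) * dt
z = f_true * t + 0.1 * rng.standard_normal(n)   # noisy unwrapped phase
phase_hat, freq_hat = track_phase_freq(z, dt)
```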
Marathe, A R; Taylor, D M
2015-08-01
Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems.
The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
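The key experimental manipulation above (varying the frequency content of the added errors while holding their magnitude fixed) can be sketched by spectrally shaping white noise and rescaling it to a target RMS. The sampling rate and 2 Hz cutoff below are assumptions, not the study's parameters:

```python
import numpy as np

def shaped_noise(n, fs, f_cut, lowpass, rms, rng):
    """White noise restricted to components below (lowpass=True) or above
    f_cut, then rescaled to a target RMS, so that error magnitude and error
    frequency content can be varied independently."""
    W = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (f <= f_cut) if lowpass else (f > f_cut)
    W[~mask] = 0.0
    x = np.fft.irfft(W, n)
    return x * (rms / np.sqrt(np.mean(x**2)))

rng = np.random.default_rng(3)
fs, n = 100.0, 4096                    # assumed cursor update rate and length
slow = shaped_noise(n, fs, 2.0, True, 1.0, rng)    # slow, drifting errors
fast = shaped_noise(n, fs, 2.0, False, 1.0, rng)   # jittery errors, same RMS
```

Either signal could then be added to the cursor velocity command to simulate imperfect decoding with a chosen error power distribution.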
Power Scaling and Frequency Stabilization of an Injection-Locked Laser
2000-05-01
In Chapter 4, I alter the well-documented theory of locking a laser to a Fabry-Perot by performing the PDH error signal derivation in a new manner ... the well-documented modulation transfer scheme to lock the frequency-doubled NPRO to a hyperfine component of an electronic transition in I2 ... generally true at very low noise frequencies, well within the feedback loop bandwidth. However, when G_OL(ν_n) ≪ 1 and thus G_CL(ν_n) ≪ 1, Equation 3.9 ...
Towards a novel look on low-frequency climate reconstructions
NASA Astrophysics Data System (ADS)
Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti
2010-05-01
Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
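The "swarm" idea above can be sketched as Monte Carlo sampling of monotone age-depth relationships, with time-dependent rates computed per ensemble member. The Gaussian age errors, depths, and ages below are hypothetical (real calibrated radiocarbon ages have more complex, non-Gaussian error structures):

```python
import numpy as np

def age_depth_ensemble(age_mean, age_sd, n_draws, rng):
    """Sample monotone age-depth relationships: draw each dated level from an
    assumed Gaussian calibrated-age distribution, rejecting age reversals."""
    draws = []
    while len(draws) < n_draws:
        ages = rng.normal(age_mean, age_sd)
        if np.all(np.diff(ages) > 0):    # age must increase with depth
            draws.append(ages)
    return np.array(draws)

rng = np.random.default_rng(4)
depths = np.array([0.0, 0.5, 1.0, 1.5])                  # m (hypothetical core)
age_mean = np.array([100.0, 1000.0, 2500.0, 4000.0])     # cal yr BP
age_sd = np.array([20.0, 50.0, 80.0, 100.0])
ens = age_depth_ensemble(age_mean, age_sd, 200, rng)

# Time-dependent rates per ensemble member give a distribution, not a best guess
acc_rates = np.diff(depths) / np.diff(ens, axis=1)       # m / yr
rate_lo, rate_hi = np.percentile(acc_rates, [5, 95], axis=0)
```

A proxy-to-climate calibration applied per member would likewise yield a swarm of reconstructions from which kernel-density likelihoods can be built.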
Entropy-based derivation of generalized distributions for hydrometeorological frequency analysis
NASA Astrophysics Data System (ADS)
Chen, Lu; Singh, Vijay P.
2018-02-01
Frequency analysis of hydrometeorological and hydrological extremes is needed for the design of hydraulic and civil infrastructure facilities as well as for water resources management. A multitude of distributions have been employed for frequency analysis of these extremes. However, no single distribution has been accepted as a global standard. Employing the entropy theory, this study derived five generalized distributions for frequency analysis that used different kinds of information encoded as constraints. These distributions were the generalized gamma (GG), the generalized beta distribution of the second kind (GB2), and the Halphen type A (Hal-A), Halphen type B (Hal-B) and Halphen type inverse B (Hal-IB) distributions, among which the GG and GB2 distributions were previously derived by Papalexiou and Koutsoyiannis (2012), while the Halphen family is first derived using entropy theory in this paper. The entropy theory allowed the parameters of the distributions to be estimated in terms of the constraints used for their derivation. The distributions were tested using extreme daily and hourly rainfall data. Results show that the root mean square error (RMSE) values were very small, indicating that the five generalized distributions fit the extreme rainfall data well. Among them, according to the Akaike information criterion (AIC) values, the GB2 and the Halphen family generally gave a better fit. Therefore, these generalized distributions are among the best choices for frequency analysis. The entropy-based derivation leads to a new way of performing frequency analysis of hydrometeorological extremes.
Rieche, Marie; Komenský, Tomás; Husar, Peter
2011-01-01
Radio Frequency Identification (RFID) systems in healthcare offer the possibility of contact-free identification and tracking of patients, medical equipment and medication. Thereby, patient safety will be improved, and costs as well as medication errors will be reduced considerably. However, the application of RFID and other wireless communication systems has the potential to cause harmful electromagnetic disturbances in sensitive medical devices. This risk mainly depends on the transmission power and the method of data communication. In this contribution we point out the reasons for such incidents and give proposals to overcome these problems. To this end, a novel modulation and transmission technique called Gaussian Derivative Frequency Modulation (GDFM) is developed. Moreover, we carry out measurements to show the interference properties of different modulation schemes in comparison to our GDFM.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ωr in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ωr/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite-velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency-domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. Robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is also analyzed.
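The central quantity (the minimum singular value of the return difference matrix I + L(jω), the multivariable distance to the critical point) can be computed on a frequency grid as below. The diagonal loop transfer matrix is an illustrative example, not one from the thesis:

```python
import numpy as np

def min_return_difference_sv(L_of_s, freqs):
    """Minimum over frequency of the smallest singular value of I + L(jw):
    the multivariable distance to the critical point."""
    m = L_of_s(1j * freqs[0]).shape[0]
    return float(min(
        np.linalg.svd(np.eye(m) + L_of_s(1j * w), compute_uv=False)[-1]
        for w in freqs))

def L(s):
    """Illustrative 2x2 diagonal loop transfer matrix (hypothetical plant)."""
    return np.diag([2.0 / (s + 1.0), 1.0 / (s + 2.0)])

freqs = np.logspace(-2, 2, 400)
margin = min_return_difference_sv(L, freqs)
```

A larger margin means the loop tolerates larger unstructured model errors before the multivariable Nyquist criterion can be violated.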
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
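The bias mechanism at issue can be illustrated with the classical method-of-moments de-attenuation for a linear model with homoscedastic, known error variance; the paper's estimators handle the harder heteroscedastic, unknown-distribution case without replicates, so the sketch below is background only, with simulated data:

```python
import numpy as np

def corrected_slope(w, y, err_var):
    """Method-of-moments de-attenuation for y = b0 + b1*x + e observed with
    w = x + u, u having known variance err_var. Classical homoscedastic case,
    shown only to illustrate the attenuation bias the paper addresses."""
    naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
    reliability = 1.0 - err_var / np.var(w, ddof=1)
    return naive / reliability

rng = np.random.default_rng(5)
n, b1, err_var = 20000, 2.0, 0.5
x = rng.standard_normal(n)                     # true covariate (e.g. use frequency)
w = x + rng.normal(0.0, np.sqrt(err_var), n)   # error-prone summary statistic
y = 1.0 + b1 * x + 0.1 * rng.standard_normal(n)
naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1) # attenuated toward zero
fixed = corrected_slope(w, y, err_var)
```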
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFEs and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data based on annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variances and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region.
In this study, the region-of-influence approach, along with the index flood technique, is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on the AMS-based precipitation frequency curves.
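The GEV-plus-bootstrap procedure described in this abstract can be sketched with SciPy on synthetic annual maxima. The distribution parameters, sample size, bootstrap count, and the 100-year ARI below are illustrative assumptions, not values from the study:

```python
# Sketch: fit a GEV distribution to an annual-maximum series (AMS) and
# bootstrap a 90% confidence interval for a return-period quantile.
# All numbers here are synthetic, for illustration only.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
ams = genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=50, random_state=rng)

def pfe_quantile(sample, ari_years):
    """Precipitation frequency estimate for a given Average Recurrence Interval."""
    c, loc, scale = genextreme.fit(sample)
    aep = 1.0 / ari_years                      # annual exceedance probability
    return genextreme.ppf(1.0 - aep, c, loc=loc, scale=scale)

# Bootstrap the 5% and 95% confidence limits on the 100-year estimate.
boot = [pfe_quantile(rng.choice(ams, size=ams.size, replace=True), 100)
        for _ in range(200)]
lo, hi = np.percentile(boot, [5, 95])
est = pfe_quantile(ams, 100)
print(f"100-yr PFE: {est:.1f} (90% CI: {lo:.1f}-{hi:.1f})")
```

A full RFA would pool annual maxima from the surrounding pixels of a homogeneous region before fitting, rather than fitting each pixel in isolation.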
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report
Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo
2013-01-01
Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of estimates of physical activity derived from self-report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451
Chin, S C; Weir-McCall, J R; Yeap, P M; White, R D; Budak, M J; Duncan, G; Oliver, T B; Zealley, I A
2017-10-01
To produce short checklists of specific anatomical review sites for different regions of the body based on the frequency of radiological errors reviewed at radiology discrepancy meetings, thereby creating "evidence-based" review areas for radiology reporting. A single-centre discrepancy database was retrospectively reviewed from a 5-year period. All errors were classified by type, modality, body system, and specific anatomical location. Errors were assigned to one of four body regions: chest, abdominopelvic, central nervous system (CNS), and musculoskeletal (MSK). Frequencies of errors in anatomical locations were then analysed. There were 561 errors in 477 examinations; 290 (46%) errors occurred in the abdomen/pelvis, 99 (15.7%) in the chest, 117 (18.5%) in the CNS, and 125 (19.9%) in the MSK system. In each body system, the five most common locations were chest: lung bases on computed tomography (CT), apices on radiography, pulmonary vasculature, bones, and mediastinum; abdominopelvic: vasculature, colon, kidneys, liver, and pancreas; CNS: intracranial vasculature, peripheral cerebral grey matter, bone, parafalcine, and the frontotemporal lobes surrounding the Sylvian fissure; and MSK: calvarium, sacrum, pelvis, chest, and spine. The five listed locations accounted for >50% of all perceptual errors, suggesting an avenue for focused review at the end of reporting. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Neutron-Star Radius from a Population of Binary Neutron Star Mergers.
Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro
2018-01-19
We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.
On higher order discrete phase-locked loops.
NASA Technical Reports Server (NTRS)
Gill, G. S.; Gupta, S. C.
1972-01-01
An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.
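A second-order discrete loop's response to a frequency step can be sketched with a proportional-plus-integrator loop filter driving a numerically controlled oscillator. The gains and step size below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a second-order discrete phase-locked loop tracking
# a frequency step. The integrator in the loop filter lets the loop
# drive the steady-state phase error of a frequency step to zero.
import math

kp, ki = 0.3, 0.02          # illustrative loop-filter gains
dphi_in = 0.05              # input frequency step (rad/sample)
theta_in = theta_nco = integ = 0.0

for n in range(2000):
    theta_in += dphi_in
    err = math.sin(theta_in - theta_nco)   # sinusoidal phase detector
    integ += ki * err                      # integrator learns the frequency
    theta_nco += integ + kp * err          # NCO phase update

print(abs(integ - dphi_in), abs(err))      # both near zero once locked
```

With these gains the linearized loop poles lie inside the unit circle, so the integrator converges to the input frequency step and the phase error decays to zero.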
NASA Technical Reports Server (NTRS)
Pei, Jing; Wall, John
2013-01-01
This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency domain analysis of the Space Launch System (SLS) vehicle. Generally for launch vehicles, determination of the derivatives is fairly straightforward since the aerodynamic data are usually linear through a moderate range of angle of attack. However, if the wind tunnel data lack proper corrections, then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to misinterpretation of the natural stability of the system and improper tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions used for dispersed frequency domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross-axis coupling can be neglected for the SLS configuration studied.
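As a generic illustration (not the SLS procedure itself), a stability derivative such as Cm_alpha can be extracted from tabulated coefficient data by central differencing around the trim point; the table values below are made up:

```python
# Extract a pitch-moment stability derivative from a tabulated
# aerodynamic database by central differencing. Synthetic, linear data;
# with nonlinear or asymmetric data the step size and differencing
# scheme would matter, as the abstract notes.
import numpy as np

alpha_deg = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])       # angle of attack (deg)
cm = np.array([0.082, 0.041, 0.000, -0.041, -0.082])    # pitch-moment coefficient

i = 2  # trim point, alpha = 0
cm_alpha = (cm[i + 1] - cm[i - 1]) / (alpha_deg[i + 1] - alpha_deg[i - 1])
print(cm_alpha)  # -0.0205 per degree: statically stable (negative slope)
```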
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
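The idea of estimating derivatives by local polynomial fitting (rather than simple LLA differencing) can be sketched as follows; the window size and test signal are illustrative assumptions, and NumPy's least-squares fit stands in for the paper's orthogonal-polynomial construction:

```python
# Estimate first derivatives of a time series by fitting a local
# quadratic in a sliding window; the linear coefficient at the window
# centre is the derivative estimate.
import numpy as np

t = np.linspace(0, 2 * np.pi, 200)
x = np.sin(t)                       # observed series (true derivative: cos)
half = 5                            # half-window of 5 samples

d1 = np.full_like(x, np.nan)
for i in range(half, len(x) - half):
    w = slice(i - half, i + half + 1)
    coef = np.polyfit(t[w] - t[i], x[w], deg=2)  # local quadratic fit
    d1[i] = coef[1]                              # slope at window centre

core = slice(half, len(x) - half)
max_err = np.max(np.abs(d1[core] - np.cos(t)[core]))
print(max_err)  # small: bias scales with the cubic term over the window
```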
Finite element simulation of light transfer in turbid media under structured illumination
USDA-ARS?s Scientific Manuscript database
Spatial-frequency domain (SFD) imaging allows estimation of the optical properties of biological tissues over a wide field of view. The technique is, however, prone to measurement error because the two crucial assumptions used for deriving the analytical solution to the diffusion approximation ...
Causal impulse response for circular sources in viscous media
Kelly, James F.; McGough, Robert J.
2008-01-01
The causal impulse response of the velocity potential for the Stokes wave equation is derived for calculations of transient velocity potential fields generated by circular pistons in viscous media. The causal Green’s function is numerically verified using the material impulse response function approach. The causal, lossy impulse response for a baffled circular piston is then calculated within the near field and the far field regions using expressions previously derived for the fast near field method. Transient velocity potential fields in viscous media are computed with the causal, lossy impulse response and compared to results obtained with the lossless impulse response. The numerical error in the computed velocity potential field is quantitatively analyzed for a range of viscous relaxation times and piston radii. Results show that the largest errors are generated in locations near the piston face and for large relaxation times, and errors are relatively small otherwise. Unlike previous frequency-domain methods that require numerical inverse Fourier transforms for the evaluation of the lossy impulse response, the present approach calculates the lossy impulse response directly in the time domain. The results indicate that this causal impulse response is ideal for time-domain calculations that simultaneously account for diffraction and quadratic frequency-dependent attenuation in viscous media. PMID:18397018
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary
2015-06-30
Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). 
We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.
NASA Astrophysics Data System (ADS)
Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.
2013-09-01
Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affect the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can still be significant, especially when large ionospheric disturbances occur and prevail, such as during periods of active space weather. In this study, the RIEs are investigated under different local time, propagation direction and solar activity conditions, and their effects on RO bending angles are characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs through comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.
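The standard linear dual-frequency correction mentioned here exploits the fact that the first-order ionospheric contribution to bending angle scales as 1/f², so a linear combination of the two GPS frequencies cancels it exactly (leaving only the higher-order RIEs the study investigates). A sketch with synthetic numbers:

```python
# Linear dual-frequency ionospheric correction for RO bending angles:
# alpha_c = (f1^2 * a1 - f2^2 * a2) / (f1^2 - f2^2).
# The ionospheric term magnitude below is synthetic.
f1, f2 = 1575.42e6, 1227.60e6      # GPS L1/L2 carrier frequencies (Hz)

alpha_neutral = 0.02               # "true" neutral-atmosphere bending (rad)
iono = 4.0e12                      # first-order ionospheric term (rad * Hz^2)
a1 = alpha_neutral + iono / f1**2  # observed bending angle at L1
a2 = alpha_neutral + iono / f2**2  # observed bending angle at L2

alpha_c = (f1**2 * a1 - f2**2 * a2) / (f1**2 - f2**2)
residual = abs(alpha_c - alpha_neutral)
print(residual)  # ~0: the 1/f^2 term cancels; real RIEs come from higher-order effects
```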
Evolutionary Model and Oscillation Frequencies for α Ursae Majoris: A Comparison with Observations
NASA Astrophysics Data System (ADS)
Guenther, D. B.; Demarque, P.; Buzasi, D.; Catanzarite, J.; Laher, R.; Conrow, T.; Kreidl, T.
2000-02-01
Inspired by the observations of low-amplitude oscillations of α Ursae Majoris A by Buzasi et al. using the WIRE satellite, a grid of stellar evolutionary tracks has been constructed to derive physically consistent interior models for the nearby red giant. The pulsation properties of these models were then calculated and compared with the observations. It is found that, by adopting the correct metallicity and for a normal helium abundance, only models in the mass range of 4.0-4.5 Msolar fall within the observational error box for α UMa A. This mass range is compatible, within the uncertainties, with the mass derived from the astrometric mass function. Analysis of the pulsation spectra of the models indicates that the observed α UMa oscillations can be most simply interpreted as radial (i.e., l=0) p-mode oscillations of low radial order n. The lowest frequencies observed by Buzasi et al. are compatible, within the observational errors, with model frequencies of radial orders n=0, 1, and 2 for models in the mass range of 4.0-4.5 Msolar. The higher frequencies observed can also be tentatively interpreted as higher n-valued radial p-modes, if we allow that some n-values are not presently observed. The theoretical l=1, 2, and 3 modes in the observed frequency range are g-modes with a mixed mode character, that is, with p-mode-like characteristics near the surface and g-mode-like characteristics in the interior. The calculated radial p-mode frequencies are nearly equally spaced, separated by 2-3 μHz. The nonradial modes are very densely packed throughout the observed frequency range and, even if excited to significant amplitudes at the surface, are unlikely to be resolved by the present observations.
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Estimates of mass change from GRACE spherical harmonic solutions exhibit north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any post-processing, the noise and errors in spherical harmonic solutions introduced very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage-corrected (forward modeled) SH results.
Interference effects in phased beam tracing using exact half-space solutions.
Boucher, Matthew A; Pluymers, Bert; Desmet, Wim
2016-12-01
Geometrical acoustics provides a correct solution to the wave equation for rectangular rooms with rigid boundaries and is an accurate approximation at high frequencies with nearly hard walls. When interference effects are important, phased geometrical acoustics is employed in order to account for phase shifts due to propagation and reflection. Error increases, however, with more absorption, complex impedance values, grazing incidence, smaller volumes and lower frequencies. Replacing the plane wave reflection coefficient with a spherical one reduces the error but results in slower convergence. Frequency-dependent stopping criteria are then applied to avoid calculating higher order reflections for frequencies that have already converged. Exact half-space solutions are used to derive two additional spherical wave reflection coefficients: (i) the Sommerfeld integral, consisting of a plane wave decomposition of a point source and (ii) a line of image sources located at complex coordinates. Phased beam tracing using exact half-space solutions agrees well with the finite element method for rectangular rooms with absorbing boundaries, at low frequencies and for rooms with different aspect ratios. Results are accurate even for long source-to-receiver distances. Finally, the crossover frequency between the plane and spherical wave reflection coefficients is discussed.
Comparison of High-Frequency Solar Irradiance: Ground Measured vs. Satellite-Derived
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lave, Matthew; Weekley, Andrew
2016-11-21
High-frequency solar variability is important to grid integration studies, but ground measurements are scarce. The high resolution irradiance algorithm (HRIA) has the ability to produce 4-second resolution global horizontal irradiance (GHI) samples at locations across North America. However, the HRIA has not been extensively validated. In this work, we evaluate the HRIA against a database of 10 high-frequency ground-based measurements of irradiance. The evaluation focuses on variability-based metrics. This results in a greater understanding of the errors in the HRIA as well as suggestions for improvement to the HRIA.
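A variability-based metric of the kind used in such validations can be sketched by comparing ramp-rate statistics of two GHI series; the signals below are synthetic stand-ins for satellite-derived vs. ground data, and the specific metric is an illustrative choice, not necessarily the study's:

```python
# Compare short-term variability of two 4-second GHI time series via the
# standard deviation of their ramp rates. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 3600, 4)                        # 4-second sampling, 1 hour
clear = 600 + 150 * np.sin(2 * np.pi * t / 1800)
ground = clear + rng.normal(0, 20, t.size)       # noisier, more variable series
derived = clear + rng.normal(0, 5, t.size)       # smoother series

def ramp_rate_std(ghi, dt=4.0):
    """Std. dev. of irradiance ramp rates (W/m^2 per second)."""
    return np.std(np.diff(ghi) / dt)

print(ramp_rate_std(ground), ramp_rate_std(derived))
```

A validation would compare such metrics between co-located satellite-derived and measured series rather than against each other's absolute values.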
a Climatology of Global Precipitation.
NASA Astrophysics Data System (ADS)
Legates, David Russell
A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. 
Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.
Influence of satellite vibration on radio over IsOWC system
NASA Astrophysics Data System (ADS)
Zong, Kang; Zhu, Jiang
2017-07-01
In this paper, we analyze the influence of satellite vibration on a radio over intersatellite optical wireless communication (IsOWC) system with an optical booster amplifier (OBA) and an optical preamplifier. The closed-form expressions of radio frequency (RF) gain, noise figure (NF) and spurious-free dynamic range (SFDR) are derived in the presence of pointing jitter, taking bias error into consideration. Numerical results for RF gain, NF and SFDR are given for demonstration. Results indicate that the bias error markedly degrades the performance of the radio over IsOWC system.
Temporal and spatial deviation in F2 peak parameters derived from FORMOSAT-3/COSMIC
NASA Astrophysics Data System (ADS)
Kumar, Sanjay; Singh, R. P.; Tan, Eng Leong; Singh, A. K.; Ghodpage, R. N.; Siingh, Devendraa
2016-06-01
The plasma frequency profiles derived from the Constellation of Observing System for Meteorology, Ionosphere and Climate (COSMIC) radio occultation measurements are compared with ground-based ionosonde data during the year 2013. Five equatorial and midlatitude stations located in the Northern and Southern Hemispheres are considered: Jicamarca, Jeju, Darwin, Learmonth, and Juliusruh. The aim is to validate the COSMIC-derived data against ground-based measurements and to estimate the difference in plasma frequency (which represents electron density) and height of the F2 layer peak during daytime/nighttime and during different seasons by comparing the two data sets. Analysis showed that the nighttime data are better correlated than the daytime, and the maximum difference occurs at the equatorial ionospheric anomaly (EIA) station as compared to lower and midlatitude stations during the equinox months. The difference between daytime and nighttime correlations becomes insignificant at midlatitude stations. The statistical analysis of computed errors in foF2 (hmF2) showed Gaussian nature with the most probable error range of ±15% (±10%) at the equatorial and EIA stations, ±9% (±7%) outside the EIA region, which reduced to ±8% (±6%) at midlatitude stations. The reduction in error at midlatitudes is attributed to the decrease in latitudinal electron density gradients. Comparing the analyzed data during the three geomagnetic storms and quiet days of the same months, it is observed that the differences are significantly enhanced during storm periods and the magnitude of difference in foF2 increases with the intensity of the geomagnetic storm.
García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan
2009-02-01
An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
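A rough illustration of the proposed figure of merit, assuming the common ApEn tolerance r = 0.2·SD and an RR resolution set by the ECG sampling frequency; the paper's exact SRN definition may differ in detail:

```python
# Signal to resolution of the neighborhood ratio (SRN), sketched as the
# ApEn neighborhood radius divided by the RR quantization step. Per the
# abstract, bias is worst when SRN sits near an integer and mildest near
# half-integers. RR values and sampling rate are synthetic.
import numpy as np

def srn(rr_ms, fs_hz):
    resolution_ms = 1000.0 / fs_hz          # quantization step of RR values
    r = 0.2 * np.std(rr_ms)                 # typical ApEn neighborhood radius
    return r / resolution_ms

rr = np.array([812, 816, 808, 820, 804, 812, 816, 808], dtype=float)
value = srn(rr, fs_hz=250)                  # 250 Hz ECG -> 4 ms resolution

frac = value - round(value)
print(value, "high-bias regime" if abs(frac) < 0.25 else "low-bias regime")
```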
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit induces aliasing of high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly in a non-repeat orbit, which precludes alias error spectral estimation based on the repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which result in an unevenly sampled time series. In view of the two aspects above, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.
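The gridpoints-per-wavelength criterion quoted above translates directly into a maximum grid spacing; a back-of-envelope sketch (the velocity and frequency values are illustrative assumptions):

```python
# Grid spacing required so the smallest wavelength is covered by a given
# number of gridpoints: 7 for the average-derivative optimal scheme,
# 23 for the classical five-point scheme, per the abstract.
def max_grid_spacing(v_min, f_max, points_per_wavelength):
    """Largest grid spacing meeting the dispersion criterion (metres)."""
    lambda_min = v_min / f_max
    return lambda_min / points_per_wavelength

v_min, f_max = 1500.0, 30.0          # slowest velocity (m/s), peak frequency (Hz)
h_optimal = max_grid_spacing(v_min, f_max, 7)
h_classical = max_grid_spacing(v_min, f_max, 23)
print(h_optimal, h_classical)  # the optimal scheme allows a ~3.3x coarser grid
```

Since cost in 2-D scales with the inverse square of the spacing, the coarser grid of the optimal scheme yields roughly an order-of-magnitude saving at the same 1 per cent accuracy.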
Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali
2015-08-01
In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.
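Extracting simple time-frequency features of the kind listed above (sub-band energies, energy concentration, an instantaneous-frequency estimate) can be sketched with a spectrogram; the synthetic signal and exact feature formulas below are illustrative assumptions, not the paper's definitions:

```python
# Simple t-f features from a synthetic "EEG" segment: alpha-band energy
# fraction, an energy-concentration measure, and a crude IF estimate
# taken as the spectrogram ridge.
import numpy as np
from scipy.signal import spectrogram

fs = 256                                         # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=64)

band = (f >= 8) & (f <= 13)                      # alpha sub-band energy fraction
alpha_energy = S[band].sum() / S.sum()
concentration = (S.sum() ** 2) / (S.size * (S ** 2).sum())  # in (0, 1]
inst_freq = f[np.argmax(S, axis=0)]              # ridge-based IF per time slice

print(alpha_energy, inst_freq.mean())
```

Feature vectors like these would then be fed to a 2-class SVM to separate ErrP from non-ErrP segments.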
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared error variation, denoted by (e²(n), Δe²(n)), into a forgetting factor λ(n). For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step-size μ(n). This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error-rate (BER) for multipath fading channels.
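The idea of mapping the squared error into an adaptive step size can be sketched with a crude error-driven rule standing in for the fuzzy inference engine; the channel, signal lengths and step-size bounds below are assumptions for illustration, not the paper's receiver.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.5, -0.3, 0.1])      # unknown channel (assumed)
N, L = 4000, 4
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N] + 0.01*rng.standard_normal(N)

w = np.zeros(L)
mu_min, mu_max = 0.005, 0.08
for n in range(L, N):
    u = x[n-L+1:n+1][::-1]                    # regressor (most recent first)
    e = d[n] - w @ u
    # Crude stand-in for the fuzzy-inference step-size control:
    # large squared error -> large mu (fast tracking),
    # small squared error -> small mu (low misadjustment).
    mu = np.clip(mu_min + 0.5*e*e, mu_min, mu_max)
    w = w + mu * e * u

print(np.round(w, 2))   # close to h_true
```

The fuzzy controller in the paper replaces the one-line clip rule with inference over both e²(n) and its variation, but the convergence-versus-misadjustment trade-off it manages is the same.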
A Dynamic Attitude Measurement System Based on LINS
Li, Hanzhou; Pan, Quan; Wang, Xiaoxu; Zhang, Juanni; Li, Jiang; Jiang, Xiangjun
2014-01-01
A dynamic attitude measurement system (DAMS) is developed based on a laser inertial navigation system (LINS). Three factors of the dynamic attitude measurement error using LINS are analyzed: dynamic error, time synchronization and phase lag. An optimal coning errors compensation algorithm is used to reduce coning errors, and two-axis wobbling verification experiments are presented in the paper. The tests indicate that the attitude accuracy is improved 2-fold by the algorithm. In order to decrease coning errors further, the attitude updating frequency is increased from 200 Hz to 2000 Hz. At the same time, a novel finite impulse response (FIR) filter with three notches is designed to filter the dither frequency of the ring laser gyro (RLG). The comparison tests suggest that the new filter is five times more effective than the old one. The paper indicates that the phase-frequency characteristics of the FIR filter and the first-order holder of the navigation computer constitute the main sources of phase lag in LINS. A formula to calculate the LINS attitude phase lag is introduced in the paper. The expressions of dynamic attitude errors induced by phase lag are derived. The paper proposes a novel synchronization mechanism that is able to simultaneously solve the problems of dynamic test synchronization and phase compensation. A single-axis turntable and a laser interferometer are applied to verify the synchronization mechanism. The experimental results show that the theoretically calculated values of phase lag and of the attitude error induced by phase lag both match the test data. The block diagram of DAMS and physical photos are presented in the paper. The final experiments demonstrate that the real-time attitude measurement accuracy of DAMS can reach up to 20″ (1σ) and the synchronization error is less than 0.2 ms under three-axis wobbling for 10 min. PMID:25177802
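The phase-lag contribution of a linear-phase FIR filter can be sketched as follows; the filter length and wobble parameters are assumed for illustration and are not taken from the paper.

```python
import numpy as np

fs = 2000.0          # attitude update rate (Hz), as in the abstract
N = 121              # FIR filter length (assumed for illustration)

# A linear-phase FIR delays every frequency component by (N-1)/2 samples,
# i.e. a constant group delay in seconds:
group_delay_s = (N - 1) / (2 * fs)

# For a sinusoidal wobble theta(t) = A*sin(2*pi*f*t), a pure time lag tau
# produces a peak attitude error of 2*A*sin(pi*f*tau).
A_deg, f_wobble = 5.0, 1.0          # 5 deg amplitude, 1 Hz wobble (assumed)
tau = group_delay_s
err_deg = 2 * A_deg * np.sin(np.pi * f_wobble * tau)
print(round(group_delay_s * 1e3, 1), "ms lag,",
      round(err_deg * 3600, 0), "arcsec peak error")
```

Even a modest lag dominates a 20″ accuracy budget under wobble, which is why the paper's synchronization mechanism compensates phase rather than ignoring it.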
Computing Instantaneous Frequency by normalizing Hilbert Transform
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a mistake that is still commonly made: for the Hilbert Transform method to work, the data must obey certain restrictions.
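For a well-behaved mono-component signal that does satisfy those restrictions, the straightforward Hilbert-phase-derivative IF works; this sketch shows that baseline (not NAHT/NHT) on a linear chirp with a known instantaneous frequency.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1/fs)
# Linear chirp 50 -> 150 Hz; true IF is f(t) = 50 + 100*t
x = np.cos(2*np.pi*(50*t + 50*t**2))

# FFT-based analytic signal (Hilbert transform)
X = np.fft.fft(x)
h = np.zeros(len(x)); h[0] = 1; h[1:len(x)//2] = 2; h[len(x)//2] = 1
z = np.fft.ifft(X*h)

# IF as the derivative of the unwrapped phase angle
inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2*np.pi)
true_f = 50 + 100*t[1:]
err = np.abs(inst_f[50:-50] - true_f[50:-50]).max()  # trim edge effects
print(err)   # small for this clean mono-component signal
```

For amplitude-modulated or multi-component data the same recipe breaks down, which is the gap the normalization schemes in the patent address.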
Computing Instantaneous Frequency by normalizing Hilbert Transform
Huang, Norden E.
2005-05-31
This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a mistake that is still commonly made: for the Hilbert Transform method to work, the data must obey certain restrictions.
NASA Astrophysics Data System (ADS)
Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.
2018-06-01
In this paper, we analyze the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay system in which the FSO link is impaired by atmospheric-turbulence-induced fading modeled by the α-μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also considers the pointing error effect on the FSO link. A novel and accurate mathematical expression for the probability density function of an FSO link experiencing α-μ distributed atmospheric turbulence in the presence of pointing errors is derived. Further, we present analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function. In addition, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme is derived in terms of the bivariate Fox's H function. Atmospheric turbulence, misalignment errors and various binary modulation schemes for intensity modulation on the optical wireless link are considered in producing the results. Finally, we analyze each of the three performance metrics at high SNR in order to express them in terms of elementary functions, and the analytical results are supported by computer-based simulations.
NASA Astrophysics Data System (ADS)
Wang, Guochao; Xie, Xuedong; Yan, Shuhua
2010-10-01
The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision and good stability is presented. Because the displacement measurement is of nano-level precision, the errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the nonlinear error caused by elliptical polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived. Simulation shows that the limit value of the linear error caused by wavelength instability is 2 nm and that, on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptical polarization are 1.49 nm, 2.99 nm and 4.49 nm for non-orthogonality angles of 1°, 2° and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.
Interference elimination in digital controllers of automation systems of oil and gas complex
NASA Astrophysics Data System (ADS)
Solomentsev, K. Yu; Fugarov, D. D.; Purchina, O. A.; Poluyan, A. Y.; Nesterchuk, V. V.; Petrenkova, S. B.
2018-05-01
This article considers problems that arise in the development of digital controllers for automatic control systems. In the presence of interference, and also at high digitization frequencies, digital differentiation produces a large error, because the derivative is computed as the difference of two nearly equal values. A differentiation method is proposed to reduce this error, in which the difference quotients over a series of samples are averaged. A block diagram implementing this differentiation method in a controller is presented.
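The proposed averaging of difference quotients can be sketched as follows; the window length, signal and noise level are illustrative assumptions, not taken from the article.

```python
import numpy as np

def smoothed_derivative(y, dt, m=8):
    # Average the backward difference quotients (y[n] - y[n-k]) / (k*dt)
    # for k = 1..m. Averaging suppresses the high-frequency noise that a
    # plain first difference amplifies by 1/dt.
    n = len(y)
    d = np.zeros(n)
    for i in range(m, n):
        d[i] = np.mean([(y[i] - y[i-k]) / (k*dt) for k in range(1, m+1)])
    return d

rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0, 1, dt)
y = np.sin(2*np.pi*t) + 0.01*rng.standard_normal(len(t))  # noisy signal

plain = np.diff(y) / dt                      # naive difference quotient
smooth = smoothed_derivative(y, dt)
truth = 2*np.pi*np.cos(2*np.pi*t)
print(np.abs(plain - truth[1:]).max(), np.abs(smooth[8:] - truth[8:]).max())
```

The averaged estimate trades a small bias (the window spans m samples) for a large reduction in noise gain, which is exactly the trade-off the article exploits.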
Differential phase measurements of D-region partial reflections
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Sechrist, C. F., Jr.
1972-01-01
Differential phase partial reflection measurements were used to deduce D region electron density profiles. The phase difference was measured by taking sums and differences of amplitudes received on an array of crossed dipoles. The reflection model used was derived from Fresnel reflection theory. Seven profiles obtained over the period from 13 October 1971 to 5 November 1971 are presented, along with the results from simultaneous measurements of differential absorption. Some possible sources of error and error propagation are discussed. A collision frequency profile was deduced from the electron concentration calculated from differential phase and differential absorption.
Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individually moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated, and the population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm and 0.14 cm and the random error was 0.27 cm, 0.22 cm and 0.23 cm in the lateral, caudocranial and anteroposterior directions, respectively. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients who exhibited greater positional variability.
This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
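The reported margins appear consistent with the widely used van Herk recipe M = 2.5Σ + 0.7σ (Σ: population systematic error, σ: population random error); the study does not state its margin formula, so treating it as van Herk is an assumption checked against the reported numbers.

```python
# Population errors in cm: lateral, caudocranial, anteroposterior
Sigma = [0.14, 0.10, 0.14]   # systematic
sigma = [0.27, 0.22, 0.23]   # random

# Assumed margin recipe (van Herk): M = 2.5*Sigma + 0.7*sigma
margins = [round(2.5*S + 0.7*s, 2) for S, s in zip(Sigma, sigma)]
print(margins)   # within ~0.01 cm of the reported 0.55, 0.41, 0.50 cm
```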
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
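The censoring bias that the maximum-likelihood method corrects can be illustrated by simulation; all numbers below are assumed, and the one-line estimator is the naive network mean, not the paper's joint inversion.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mb = 5.0                 # true body-wave magnitude (assumed)
station_noise = 0.3           # single-station magnitude scatter (assumed)
threshold = 5.0               # station detection threshold (assumed)

# 10000 hypothetical events recorded at 40 stations each
obs = true_mb + station_noise*rng.standard_normal((10000, 40))
detected = obs >= threshold   # censoring: sub-threshold readings are lost
naive = np.array([row[d].mean() for row, d in zip(obs, detected) if d.any()])
print(round(naive.mean(), 2))  # ~5.24: the naive network mean is biased high
```

Averaging only the arrivals that beat the noise systematically overestimates mb, which is why a maximum-likelihood treatment of the hidden (censored) stations is needed.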
GNSS software receiver sampling noise and clock jitter performance and impact analysis
NASA Astrophysics Data System (ADS)
Chen, Jian Yun; Feng, XuZhe; Li, XianBin; Wu, GuangYao
2015-02-01
The multi-frequency, multi-constellation GNSS software-defined radio receiver is becoming more and more popular due to its simple architecture, flexible configuration and good coherence in multi-frequency signal processing. It plays an important role in navigation signal processing and signal quality monitoring. In particular, driving the sampling clock of the analogue-to-digital converter (ADC) from an FPGA means that a more flexible radio transceiver design is possible. According to the concept of software defined radio (SDR), the ideal is to digitize as close to the antenna as possible. However, because GNSS carrier frequencies lie in the GHz range, converting at the carrier frequency is expensive and consumes more power. Bandpass sampling is a cheaper, more effective alternative: it allows an RF signal to be sampled at as little as twice the signal bandwidth. Unfortunately, as the other side of the coin, the SDR concept and bandpass sampling degrade receiver performance. The ADC suffers larger sampling clock jitter generated by the FPGA, and the low sampling frequency introduces more noise into the receiver, so the influence of sampling noise can no longer be neglected. This paper analyzes the sampling noise, presents its influence on the carrier-to-noise ratio, and derives the ranging error by calculating the synchronization error of the delay-locked loop. Simulations addressing each factor of the sampling-noise-induced ranging error are performed. Simulation and experimental results show that if the target ranging accuracy is at the level of centimeters, the quantization length should be no less than 8 bits and the sampling clock jitter should not exceed 30 ps.
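The closing numbers can be related to two standard rules of thumb, assuming direct bandpass sampling at the GPS L1 carrier; this is an illustration, not the paper's DLL-based ranging-error derivation.

```python
import math

f_rf = 1.57542e9      # GPS L1 carrier (Hz)
jitter = 30e-12       # sampling clock jitter (s), the paper's limit
bits = 8              # quantization length, the paper's minimum

# Classic ADC rules of thumb (assumed applicable here):
snr_jitter = -20*math.log10(2*math.pi*f_rf*jitter)   # jitter-limited SNR
snr_quant = 6.02*bits + 1.76                          # quantization SNR
print(round(snr_jitter, 1), "dB (jitter),",
      round(snr_quant, 1), "dB (quantization)")
```

At GHz input frequencies the jitter term, not quantization, is the binding constraint, which matches the paper's emphasis on clock jitter.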
Metrology for terahertz time-domain spectrometers
NASA Astrophysics Data System (ADS)
Molloy, John F.; Naftaly, Mira
2015-12-01
In recent years the terahertz time-domain spectrometer (THz TDS) [1] has emerged as a key measurement device for spectroscopic investigations in the frequency range of 0.1-5 THz. To date, almost every type of material has been studied using THz TDS, including semiconductors, ceramics, polymers, metal films, liquid crystals, glasses, pharmaceuticals, DNA molecules, proteins, gases, composites, foams, oils, and many others. Measurements with a TDS are made in the time domain; conversion from the time-domain data to a frequency spectrum is achieved by applying the Fourier Transform, calculated numerically using the Fast Fourier Transform (FFT) algorithm. As in many other types of spectrometer, THz TDS requires that the sample data be referenced to similarly acquired data with no sample present. Unlike frequency-domain spectrometers, which detect light intensity and measure absorption spectra, a TDS records both amplitude and phase information, and therefore yields both the absorption coefficient and the refractive index of the sample material. The analysis of data from THz TDS relies on the assumptions that: a) the frequency scale is accurate; b) the measurement of the THz field amplitude is linear; and c) the presence of the sample does not affect the performance characteristics of the instrument. The frequency scale of a THz TDS is derived from the displacement of the delay line; via the FFT, positioning errors may give rise to frequency errors that are difficult to quantify. The measurement of the field amplitude in a THz TDS is required to be linear with a dynamic range of the order of 10 000. Attention must also be given to sample positioning and handling in order to avoid sample-related errors.
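The way the frequency scale follows from delay-line displacement can be sketched as follows; the scan length and step size are assumed for illustration.

```python
import numpy as np

c = 299_792_458.0            # speed of light (m/s)
scan_length = 15e-3          # delay-line scan travel (m), assumed
step = 5e-6                  # delay-line step (m), assumed

# Each metre of stage travel adds 2 m of optical path, so the time step is
dt = 2*step/c                # (s); a stage positioning error scales dt,
n = int(scan_length/step)    # and via the FFT, the whole frequency axis
freqs = np.fft.rfftfreq(n, dt)
print(round(freqs[1]/1e9, 2), "GHz resolution,",
      round(freqs[-1]/1e12, 2), "THz maximum")
```

A fractional error in the stage calibration therefore rescales every frequency by the same fraction, which is why such errors are hard to spot from the spectrum alone.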
NASA Technical Reports Server (NTRS)
Moore, H. J.; Wu, S. C.
1973-01-01
The effect of reading error on two hypothetical slope frequency distributions and on two slope frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
The centrifugal force reversal and X-ray bursts
NASA Astrophysics Data System (ADS)
Abramowicz, M. A.; Kluźniak, W.; Lasota, J. P.
2001-08-01
Heyl (2000) made an interesting suggestion that the observed shifts in QPO frequency in type I X-ray bursts could be influenced by the same geometrical effect of strong gravity as the one that causes centrifugal force reversal discovered by Abramowicz & Lasota (1974). However, his main result contains a sign error. Here we derive the correct formula and conclude that constraints on the M(R) relation for neutron stars deduced from the rotational-modulation model of QPO frequency shifts are of no practical interest because the correct formula implies a weak condition R* > 1.3 RS, where RS is the Schwarzschild radius. We also argue against the relevance of the rotational-modulation model to the observed frequency modulations.
Impact of Scattering Model on Disdrometer Derived Attenuation Scaling
NASA Technical Reports Server (NTRS)
Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)
2016-01-01
NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 GHz attenuation from the disdrometer and the 20 GHz timeseries as well as to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model, however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
Impact of Scattering Model on Disdrometer Derived Attenuation Scaling
NASA Technical Reports Server (NTRS)
Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo
2016-01-01
NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP#5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 gigahertz attenuation from the disdrometer and the 20 gigahertz time-series as well as to directly measure the 40 gigahertz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model, however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
A pattern jitter free AFC scheme for mobile satellite systems
NASA Technical Reports Server (NTRS)
Yoshida, Shousei
1993-01-01
This paper describes a scheme for pattern-jitter-free automatic frequency control (AFC) with a wide frequency acquisition range. In this scheme, equalizing the signals fed to the frequency discriminator allows pattern-jitter-free performance to be achieved for all roll-off factors. In order to determine the acquisition range, the frequency discrimination characteristics are analyzed using a newly derived frequency-domain model. As a result, it is shown that a sufficiently wide acquisition range over a given system symbol rate can be achieved independent of symbol timing errors. Additionally, computer simulation demonstrates that frequency jitter performance improves in proportion to E(sub b)/N(sub 0), because pattern-dependent jitter is suppressed in the discriminator output. These results show significant promise for application to mobile satellite systems, which feature relatively low symbol rate transmission with roll-off factors of approximately 0.4-0.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de
2015-10-15
This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.
Improved calibration technique for in vivo proton MRS thermometry for brain temperature measurement.
Zhu, M; Bashir, A; Ackerman, J J; Yablonskiy, D A
2008-09-01
The most common MR-based approach to noninvasively measure brain temperature relies on the linear relationship between the (1)H MR resonance frequency of tissue water and the tissue's temperature. Herein we provide the most accurate in vivo assessment of this relationship to date. It was derived by acquiring in vivo MR spectra from a rat brain using a high-field (11.74 Tesla [T]) MRI scanner and a single-voxel MR spectroscopy technique based on a LASER pulse sequence. Data were analyzed using three different methods to estimate the (1)H resonance frequencies of water and the metabolites NAA, Cho, and Cr, which are used as temperature-independent internal (frequency) references. Standard modeling of frequency-domain data as composed of resonances characterized by Lorentzian line shapes gave the tightest resonance-frequency versus temperature correlation. An analysis of the uncertainty in temperature estimation showed that the major limiting factor is the error in estimating the metabolite frequency. For example, for a metabolite resonance linewidth of 8 Hz, a signal sampling rate of 2 Hz and an SNR of 5, an accuracy of approximately 0.5 degrees C can be achieved at a magnetic field of 3 T. For comparison, in the current study conducted at 11.74 T, the temperature estimation error was approximately 0.1 degrees C.
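The underlying linear relation can be sketched with illustrative literature-style coefficients; the intercept and slope below are placeholders, not the calibration derived in this paper.

```python
# Water-reference thermometry sketch: temperature is assumed linear in the
# water-NAA chemical-shift difference. The values 2.665 ppm at 37 C and a
# slope of about -0.01 ppm per degree C are assumed placeholders.
def brain_temperature(delta_water_naa_ppm,
                      delta_37=2.665, slope_ppm_per_degC=-0.01):
    return 37.0 + (delta_water_naa_ppm - delta_37) / slope_ppm_per_degC

print(brain_temperature(2.665))   # 37.0
print(brain_temperature(2.675))   # 36.0: a +0.01 ppm shift reads 1 C cooler
```

With a ~0.01 ppm/°C slope, the ~0.1 °C error quoted at 11.74 T corresponds to resolving shifts of about 0.001 ppm, which is why metabolite-frequency estimation dominates the uncertainty.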
Global Application of TaiWan Ionospheric Model to Single-Frequency GPS Positioning
NASA Astrophysics Data System (ADS)
Macalalad, E.; Tsai, L. C.; Wu, J.
2012-04-01
Ionospheric delay is one of the major sources of error in GPS positioning and navigation. This error in both pseudorange and phase ranges varies depending on the location of observation, local time, season, solar cycle and geomagnetic activity. For single-frequency receivers, this delay is usually removed using ionospheric models. Two of them are the Klobuchar, or broadcast, model and the global ionosphere map (GIM) provided by the International GNSS Service (IGS). In this paper, a three-dimensional ionospheric electron (ne) density model derived from FormoSat3/COSMIC GPS radio occultation measurements, called the TaiWan Ionosphere Model (TWIM), is used. It was used to calculate the slant total electron content (STEC) between the receiver and GPS satellites to correct single-frequency pseudorange observations. The corrected pseudorange for every epoch was used to determine a more accurate position of the receiver. Observations were made on July 2, 2011 (Kp index = 0-2) at five randomly selected sites across the globe, four of which are IGS stations (station IDs: cnmr, coso, irkj and morp), while the other is a low-cost single-frequency receiver located in Chungli City, Taiwan (ID: isls). TEC maps generated using TWIM exhibited a detailed structure of the ionosphere, whereas Klobuchar and GIM provided only the basic diurnal and geographic features of the ionosphere. It was also shown that, for single-frequency static point positioning, TWIM provides more accurate and more precise positioning than the Klobuchar and GIM models for all stations. The average %errors of the corrections made by Klobuchar, GIM and TWIM in DRMS are 3.88%, 0.78% and 17.45%, respectively, while the average %errors in VRMS for Klobuchar, GIM and TWIM are 53.55%, 62.09% and 66.02%, respectively. This shows the capability of TWIM to provide a good global three-dimensional ionospheric model.
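The first-order ionospheric range correction that such models supply follows the standard 40.3·STEC/f² relation, sketched here for the GPS L1 frequency:

```python
# First-order ionospheric group delay on a GPS pseudorange:
#   delay (m) = 40.3 * STEC / f**2, with STEC in electrons/m^2.
def iono_delay_m(stec_tecu, freq_hz):
    stec = stec_tecu * 1e16            # 1 TECU = 1e16 electrons/m^2
    return 40.3 * stec / freq_hz**2

L1 = 1.57542e9                         # GPS L1 carrier frequency (Hz)
print(round(iono_delay_m(10, L1), 2))  # ~1.62 m for 10 TECU on L1
```

A model such as TWIM supplies the STEC along each receiver-satellite path; subtracting the resulting delay from the raw pseudorange gives the corrected observable used in the positioning comparison above.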
Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong
2016-01-01
Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of varying concentration dynamics with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R² < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. 
For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.
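The period-weighted approach above can be sketched in a few lines: each concentration sample represents the period between the midpoints of adjacent sample times, and the load is the sum of concentration × discharge × period length. The midpoint convention and units here are illustrative assumptions, not the study's exact implementation:

```python
# Hedged sketch of a period-weighted load estimate. Assumes each sample
# represents the interval between midpoints of adjacent sample times, with
# concentration in mg/L, discharge in m^3/s, and times in seconds.

def period_weighted_load(times_s, conc_mg_L, q_m3_s):
    """Return the total solute load in kg over the sampled period."""
    n = len(times_s)
    load_mg = 0.0
    for i in range(n):
        left = times_s[0] if i == 0 else 0.5 * (times_s[i - 1] + times_s[i])
        right = times_s[-1] if i == n - 1 else 0.5 * (times_s[i] + times_s[i + 1])
        dt = right - left                               # seconds represented
        # mg/L * m^3/s * s * (1000 L/m^3) = mg
        load_mg += conc_mg_L[i] * q_m3_s[i] * dt * 1000.0
    return load_mg / 1.0e6                              # mg -> kg

# Three daily samples at a constant 1 mg/L and 2 m^3/s over two days:
day = 86400.0
total_kg = period_weighted_load([0.0, day, 2 * day], [1, 1, 1], [2, 2, 2])
print(total_kg)  # 345.6
```

Because the estimate interpolates concentration only in time, its error grows with concentration variance and shrinks with sampling frequency, consistent with the sensitivities reported above.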
Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System
NASA Technical Reports Server (NTRS)
Pfenninger, W. Matthew; Papen, George C.
1992-01-01
Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and the interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of the temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.
Analysis of error type and frequency in apraxia of speech among Portuguese speakers.
Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo
2010-01-01
Most studies characterizing errors in the speech of patients with apraxia involve the English language. Our aim was to analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. Twenty adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and their frequencies compared. We observed substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration, and metathesis errors, in descending order of frequency. Omission errors were among the most common, whereas addition errors were infrequent. These findings differ from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types, the inclusion of speakers with apraxia secondary to aphasia, and differences between the structure of Portuguese and English in terms of syllable-onset complexity and its effect on motor control. The frequencies of omission and addition errors observed differed from those reported for speakers of English.
Asquith, William H.; Thompson, David B.
2008-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimating peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimating peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of the log10 transformation does not fully linearize the relations between the variables, so some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds, and it increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation, in log10 space, between peak streamflow and drainage area. The bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address it, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. The equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors than the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to that of the previous equations and the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
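The PRESS-minimization idea can be illustrated with synthetic data: regress the response on a power transformation A**lam of drainage area and pick the exponent that minimizes PRESS, the leave-one-out prediction error sum of squares. The data and the 0.8·A**0.3 relation below are invented for illustration, not the published Texas equations:

```python
# Hedged sketch of PRESS minimization over a power transformation.
# PRESS is computed with the hat-matrix shortcut: the leave-one-out residual
# equals the ordinary residual divided by (1 - h_ii).
import numpy as np

def press(X, y):
    """PRESS statistic of an OLS fit of y on design matrix X."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    e = y - H @ y
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))

rng = np.random.default_rng(0)
A = 10 ** rng.uniform(0, 4, 60)                    # synthetic drainage areas
y = 2.0 + 0.8 * A ** 0.3 + rng.normal(0, 0.1, 60)  # power law; curved in log10 space

lams = np.linspace(0.05, 0.9, 18)                  # candidate exponents
best = min(lams, key=lambda lam: press(
    np.column_stack([np.ones_like(A), A ** lam]), y))
print(round(float(best), 2))
```

A grid search like this recovers an exponent near the true 0.3, whereas forcing a log10-only form would leave curvature, and hence systematic bias, in the residuals.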
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass depends on the quality of carrier phase measurement, so proper handling of carrier phase error is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims to eliminate large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. Its advantage is that it requires no additional environment information and performs well in the presence of multiple large errors, in contrast with previous approaches. The core of the proposed algorithm is removing the geometrical distance from the dual frequency carrier phase measurements, so that the carrier phase error is separated and becomes detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement by exploiting the fact that carrier phase measurements at different frequencies contain the same geometrical distance, and then propose the DDGF detection to detect large carrier phase error differences between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open-sky test, a man-made multipath test, and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurements by checking the error difference between the two frequencies, and that after the DDGF detection the accuracy of the baseline vector in the GNSS compass is improved.
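The geometry-free cancellation at the heart of the DDGF check can be sketched directly: double-differenced (DD) carrier phases on two frequencies, expressed in meters, share the same geometrical term, so differencing them exposes a large error on either frequency. The 5 cm threshold and the data below are illustrative assumptions:

```python
# Minimal DDGF-style check: the common geometrical term rho cancels in the
# between-frequency difference, leaving only the per-frequency errors.

def ddgf_flags(dd_f1_m, dd_f2_m, threshold_m=0.05):
    """Flag epochs whose DDGF residual (f1 minus f2, meters) exceeds the threshold."""
    return [abs(a - b) > threshold_m for a, b in zip(dd_f1_m, dd_f2_m)]

rho = [12.304, 12.311, 12.318]   # common DD geometrical term (m), cancels out
err_l1 = [0.002, 0.210, -0.001]  # a ~21 cm error (e.g., multipath) at epoch 1
err_l2 = [0.001, 0.003, 0.002]
dd_l1 = [r + e for r, e in zip(rho, err_l1)]
dd_l2 = [r + e for r, e in zip(rho, err_l2)]

print(ddgf_flags(dd_l1, dd_l2))  # [False, True, False]
```

Because the check needs only the two-frequency measurements themselves, it requires no environment model, which is the property the abstract emphasizes.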
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming to reduce the estimation error of the sensor frequency response function (FRF) obtained by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Analysis of window effects on these errors reveals that the commonly used Hanning window leads to a smaller interpolation error, which can be further reduced by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Accordingly, a new dual-cosine window, with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression and better transient error suppression when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (about 0.4 dB). One direction of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is then used as an example to verify the new dual-cosine window-based spectral estimation method. Model simulation shows that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, low time consumption, and short data-record requirements; calculations of the balance FRF from actual data are consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
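The bin structure described above (non-zero DFT bins only at selected cosine orders) follows directly from building the window as a sum of cosines at those orders. The coefficients below are arbitrary placeholders, not the paper's dual-cosine window; they only demonstrate that a cosine-sum window of orders 1 and 3 has non-zero DFT bins at 0, ±1, and ±3 and zeros elsewhere:

```python
# Sketch: DFT bin structure of a cosine-sum window. Coefficients a0, a1, a3
# are hypothetical, chosen only so the window sums the stated cosine orders.
import cmath
import math

N = 64
a0, a1, a3 = 0.375, 0.5, 0.125     # assumed (not the paper's) coefficients
w = [a0 - a1 * math.cos(2 * math.pi * n / N)
        + a3 * math.cos(6 * math.pi * n / N) for n in range(N)]

def dft_bin(x, k):
    """Single DFT bin k of sequence x."""
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
               for n in range(len(x)))

mags = [abs(dft_bin(w, k)) for k in range(6)]
print([round(m, 6) for m in mags])  # non-zero only at k = 0, 1, 3
```

Each cosine of order k contributes N/2 to bins ±k and the constant term contributes N to bin 0, which is why the magnitudes land exactly on those bins.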
Mesospheric radar wind comparisons at high and middle southern latitudes
NASA Astrophysics Data System (ADS)
Reid, Iain M.; McIntosh, Daniel L.; Murphy, Damian J.; Vincent, Robert A.
2018-05-01
We compare hourly averaged neutral winds derived from two meteor radars operating at 33.2 and 55 MHz to estimate the errors in these measurements. We then compare the meteor radar winds with those from a medium-frequency partial reflection radar operating at 1.94 MHz. These three radars are located at Davis Station, Antarctica. We then consider a middle-latitude 55 MHz meteor radar wind comparison with a 1.98 MHz medium-frequency partial reflection radar to determine how representative the Davis results are. At both sites, the medium-frequency radar winds are clearly underestimated, and the underestimation increases from 80 km to the maximum height of 98 km. Correction factors are suggested for these results.
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
Low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. First, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes detection of optical axis angle variation of the star sensors, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze how the low frequency error varies. Third, we use relative calibration and information fusion among star sensors to unify the datum and obtain high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of a geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain satellite type are used. Test results demonstrate that the calibration model describes the low frequency error variation well, and that the uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. This review covers some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed, with outcome measures including the frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and the use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed a mean laboratory error frequency of 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories, even for governmentally mandated quality assurance practices such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
NASA Astrophysics Data System (ADS)
Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai
2016-03-01
The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. 
Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
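As a baseline for the swept-frequency error the abstract analyzes, a fixed-frequency digital lock-in can be sketched in a few lines: multiply the input by quadrature references at the reference frequency and average (an ideal low-pass) to recover the amplitude. With a swept reference and a real low-pass filter, the filter dynamics introduce the error the article predicts; this static version has none:

```python
# Minimal fixed-frequency digital lock-in sketch (illustrative, not the
# article's LIA). Averaging over an integer number of reference cycles acts
# as the low-pass filter.
import math

def lock_in_amplitude(signal, fs, f_ref):
    """Estimate the amplitude of the f_ref component of `signal` sampled at fs."""
    n = len(signal)
    i = sum(s * math.cos(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(signal)) / n
    q = sum(s * math.sin(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(signal)) / n
    return 2 * math.hypot(i, q)

fs, f_ref, amp = 10_000.0, 100.0, 0.5
sig = [amp * math.sin(2 * math.pi * f_ref * k / fs) for k in range(10_000)]
print(round(lock_in_amplitude(sig, fs, f_ref), 3))  # 0.5
```

When the reference frequency is swept, the demodulated signal becomes a transient passing through the low-pass filter, which is why the measured response lags and distorts in the way the final-value-theorem analysis quantifies.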
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.
2014-01-01
The regional regression equations were integrated into the U.S. Geological Survey's StreamStats program, a national map-based web application that gives the public easy access to published flood frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means of retrieving the most up-to-date flood frequency and basin characteristic data. It is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
Financial errors in dementia: Testing a neuroeconomic conceptual framework
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.
2013-01-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically balanced cohorts of patients with Alzheimer's disease (AD, n = 100) and behavioral variant frontotemporal dementia (bvFTD, n = 50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from the neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884
NASA Astrophysics Data System (ADS)
Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven
2017-11-01
State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally, model errors can affect ocean geostrophic currents derived from satellite altimeter data, while remaining biases in these models may affect longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study assesses five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features such as the Gakkel Ridge. The standard deviation between models ranges from 0.03 to 0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation of the freeboard differences is 0.03-0.06 m.
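The mechanism by which an unresolved gravity feature leaks into freeboard can be illustrated with synthetic numbers (not the study's data): the sea surface at a floe is interpolated from sea level anomalies (lead height minus MSS) at sparse leads, so any short-wavelength MSS error between the leads maps one-for-one into the freeboard:

```python
# Hedged sketch of freeboard retrieval between two leads. A 20 cm
# short-wavelength "gravity feature" missing from the model MSS appears
# directly in the retrieved freeboard.
import math

def freeboard(x_floe, h_floe, x_leads, h_leads, mss):
    """Freeboard = floe elevation - (MSS + SLA linearly interpolated from two leads)."""
    sla = [h - mss(x) for x, h in zip(x_leads, h_leads)]
    (x0, s0), (x1, s1) = (x_leads[0], sla[0]), (x_leads[1], sla[1])
    sla_f = s0 + (s1 - s0) * (x_floe - x0) / (x1 - x0)
    return h_floe - (mss(x_floe) + sla_f)

bump = lambda x: 0.20 * math.exp(-(((x - 5_000.0) / 1_000.0) ** 2))
true_mss = lambda x: 20.0 + bump(x)       # sea surface follows the feature
model_mss = lambda x: 20.0                # model misses the feature

leads_x, floe_x = [0.0, 10_000.0], 5_000.0
leads_h = [true_mss(x) for x in leads_x]  # calm leads sit at the sea surface
floe_h = true_mss(floe_x) + 0.30          # 30 cm true freeboard

print(round(freeboard(floe_x, floe_h, leads_x, leads_h, true_mss), 3))   # 0.3
print(round(freeboard(floe_x, floe_h, leads_x, leads_h, model_mss), 3))  # 0.5
```

Note that MSS errors varying linearly between leads are absorbed by the lead interpolation; only the short-wavelength (unresolved) component survives, which is why dense leads suppress the effect and consolidated pack ice exacerbates it.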
Free space optical ultra-wideband communications over atmospheric turbulence channels.
Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu
2010-08-02
A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.
Throughput and delay analysis of IEEE 802.15.6-based CSMA/CA protocol.
Ullah, Sana; Chen, Min; Kwak, Kyung Sup
2012-12-01
The IEEE 802.15.6 is a new communication standard on Wireless Body Area Networks (WBANs) that focuses on a variety of medical, Consumer Electronics (CE), and entertainment applications. In this paper, the throughput and delay performance of the IEEE 802.15.6 is presented. Numerical formulas are derived to determine the maximum throughput and minimum delay limits of the IEEE 802.15.6 for an ideal channel with no transmission errors. These limits are derived for different frequency bands and data rates. Our analysis is validated by extensive simulations using a custom C++ simulator. Based on the analytical and simulation results, useful conclusions are derived for network provisioning and packet size optimization for different applications.
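The shape of such a maximum-throughput limit can be sketched for an error-free stop-and-wait CSMA/CA exchange: one CCA slot, a data frame, an interframe space, an ACK, and another interframe space per cycle. All timing and frame parameters below are illustrative assumptions, not the standard's exact IEEE 802.15.6 values:

```python
# Back-of-the-envelope throughput-limit sketch for an ideal (error-free)
# channel. Per-cycle overhead caps throughput well below the raw PHY rate.

def max_throughput_bps(payload_bytes, rate_bps, overhead_bytes,
                       t_ifs_s, t_ack_s, t_cca_s):
    """Saturation throughput of one data+ACK cycle, bits per second."""
    t_data = 8 * (payload_bytes + overhead_bytes) / rate_bps
    t_cycle = t_cca_s + t_data + t_ifs_s + t_ack_s + t_ifs_s
    return 8 * payload_bytes / t_cycle

rate = 971_400.0                      # assumed PHY data rate (bps)
t_ack = 8 * 9 / rate                  # assumed 9-byte ACK frame
s = max_throughput_bps(payload_bytes=255, rate_bps=rate, overhead_bytes=9,
                       t_ifs_s=75e-6, t_ack_s=t_ack, t_cca_s=63e-6)
print(round(s / 1e3, 1), "kbps")      # well below the raw PHY rate
```

Because the per-cycle overhead is fixed, the achievable fraction of the PHY rate grows with payload size, which is the basis for the packet-size optimization conclusions mentioned above.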
Wildey, R.L.
1988-01-01
A method is derived for determining the dependence of radar backscatter on incidence angle that is applicable to the region corresponding to a particular radar image. The method is based on enforcing mathematical consistency between the frequency distribution of the image's pixel signals (histogram of DN values with suitable normalizations) and a one-dimensional frequency distribution of slope component, as might be obtained from a radar or laser altimetry profile in or near the area imaged. In order to achieve a unique solution, the auxiliary assumption is made that the two-dimensional frequency distribution of slope is isotropic. The backscatter is not derived in absolute units. The method is developed in such a way as to separate the reflectance function from the pixel-signal transfer characteristic. However, these two sources of variation are distinguishable only on the basis of a weak dependence on the azimuthal component of slope; therefore such an approach can be expected to be ill-conditioned unless the revision of the transfer characteristic is limited to the determination of an additive instrumental background level. The altimetry profile does not have to be registered in the image, and the statistical nature of the approach minimizes pixel noise effects and the effects of a disparity between the resolutions of the image and the altimetry profile, except in the wings of the distribution where low-number statistics preclude accuracy anyway. The problem of dealing with unknown slope components perpendicular to the profiling traverse, which besets the one-to-one comparison between individual slope components and pixel-signal values, disappears in the present approach. In order to test the resulting algorithm, an artificial radar image was generated from the digitized topographic map of the Lake Champlain West quadrangle in the Adirondack Mountains, U.S.A., using an arbitrarily selected reflectance function. 
From the same map, a one-dimensional frequency distribution of slope component was extracted. The algorithm recaptured the original reflectance function to the degree that, for the central 90% of the data, the discrepancy translates to an RMS slope error of 0.1°. For the central 99% of the data, the maximum error translates to 1°; at the absolute extremes of the data the error grows to 6°. © 1988 Kluwer Academic Publishers.
NASA Astrophysics Data System (ADS)
Peselnick, L.
1982-08-01
An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.
Analytic calculations of anharmonic infrared and Raman vibrational spectra
Louant, Orian; Ruud, Kenneth
2016-01-01
Using a recently developed recursive scheme for the calculation of high-order geometric derivatives of frequency-dependent molecular properties [Ringholm et al., J. Comp. Chem., 2014, 35, 622], we present the first analytic calculations of anharmonic infrared (IR) and Raman spectra including anharmonicity both in the vibrational frequencies and in the IR and Raman intensities. In the case of anharmonic corrections to the Raman intensities, this involves the calculation of fifth-order energy derivatives—that is, the third-order geometric derivatives of the frequency-dependent polarizability. The approach is applicable to both Hartree–Fock and Kohn–Sham density functional theory. Using generalized vibrational perturbation theory to second order, we have calculated the anharmonic infrared and Raman spectra of the non- and partially deuterated isotopomers of nitromethane, where the inclusion of anharmonic effects introduces combination and overtone bands that are observed in the experimental spectra. For the major features of the spectra, the inclusion of anharmonicities in the calculation of the vibrational frequencies is more important than anharmonic effects in the calculated infrared and Raman intensities. Using methanimine as a trial system, we demonstrate that the analytic approach avoids errors in the calculated spectra that may arise if numerical differentiation schemes are used. PMID:26784673
Experiments on Frequency Dependence of the Deflection of Light in Yang-Mills Gravity
NASA Astrophysics Data System (ADS)
Hao, Yun; Zhu, Yiyi; Hsu, Jong-Ping
2018-01-01
In Yang-Mills gravity based on flat space-time, the eikonal equation for a light ray is derived from the modified Maxwell wave equations in the geometric-optics limit. One obtains a Hamilton-Jacobi-type equation, G_L^{μν} ∂_μΨ ∂_νΨ = 0, with an effective Riemannian metric tensor G_L^{μν}. According to Yang-Mills gravity, light rays (and macroscopic objects) move as if they were in an effective curved space-time with a metric tensor. The deflection angle of a light ray by the sun is about 1.53″ for experiments at optical frequencies ≈ 10^14 Hz. It is roughly 12% smaller than the usual value 1.75″. However, the experimental data of the past 100 years for the deflection of light by the sun at optical frequencies have uncertainties of 10-20% due to large systematic errors. If one does not take the geometric-optics limit, one has the equation G_L^{μν}[(∂_μΨ)(∂_νΨ) cos Ψ + (∂_μ∂_νΨ) sin Ψ] = 0, which suggests that the deflection angle could be frequency-dependent, according to Yang-Mills gravity. Nowadays, one has very accurate data at radio frequencies ≈ 10^9 Hz with uncertainties of less than 0.1%. Thus, one can test this suggestion by using frequencies ≈ 10^12 Hz, which could have a small uncertainty of 0.1% due to the absence of systematic errors in very long baseline interferometry.
Elucidating the mechanisms of paternal non-disjunction of chromosome 21 in humans.
Savage, A R; Petersen, M B; Pettay, D; Taft, L; Allran, K; Freeman, S B; Karadima, G; Avramopoulos, D; Torfs, C; Mikkelsen, M; Hassold, T J; Sherman, S L
1998-08-01
Paternal non-disjunction of chromosome 21 accounts for 5-10% of Down syndrome cases, therefore, relative to the maternally derived cases, little is known about paternally derived trisomy 21. We present the first analysis of recombination and non-disjunction for a large paternally derived population of free trisomy 21 conceptuses ( n = 67). Unlike maternal cases where the ratio of meiosis I (MI) to meiosis II (MII) errors is 3:1, a near 1:1 ratio exists among paternal cases, with a slight excess of MII errors. We found no paternal age effect for the overall population nor when classifying cases according to stage of non-disjunction error. Among 22 MI cases, only five had an observable recombinant event. This differs significantly from the 11 expected events ( P < 0.02, Fisher's exact), suggesting reduced recombination along the non-disjoined chromosomes 21 involved in paternal MI non-disjunction. No difference in recombination was detected among 27 paternal MII cases as compared with controls. However, cases exhibited a slight increase in the frequency of proximal and medial exchange when compared with controls (0.37 versus 0.28, respectively). Lastly, this study confirmed previous reports of excess male probands among paternally derived trisomy 21 cases. However, we report evidence suggesting an MII stage-specific sex ratio disturbance where 2.5 male probands were found for each female proband. Classification of MII cases based on the position of the exchange event suggested that the proband sex ratio disturbance was restricted to non-telomeric exchange cases. Based on these findings, we propose new models to explain the association between paternally derived trisomy 21 and excessive male probands.
Does McRuer's Law Hold for Heart Rate Control via Biofeedback Display?
NASA Technical Reports Server (NTRS)
Courter, B. J.; Jex, H. R.
1984-01-01
Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a command pulse-rate and the measured rate, a compensatory (error-correcting) heart-rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachometer differenced with a forcing function from a quasi-random input generator; the resulting pulse-rate error is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-law properties are derived, such as crossover frequency, stability margins, and closed-loop bandwidth. These are evaluated for a range of forcing functions and for step as well as random disturbances.
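McRuer's crossover law states that, near gain crossover, the combined operator-plus-plant open-loop dynamics approximate an integrator with an effective time delay, Y_OL(jω) ≈ ω_c e^{-jωτ}/(jω). A minimal numerical sketch of the loop properties mentioned above (crossover frequency, phase margin), with ω_c and τ as purely illustrative values:

```python
import numpy as np

def crossover_model(w, wc, tau):
    """McRuer crossover model: Y_OL(jw) = wc * exp(-j*w*tau) / (j*w)."""
    return wc * np.exp(-1j * w * tau) / (1j * w)

wc, tau = 2.0, 0.3        # assumed crossover frequency (rad/s) and delay (s)
# |Y_OL| = wc / w, so unity gain occurs exactly at w = wc.
Y = crossover_model(wc, wc, tau)
# Phase at crossover: -90 deg from the integrator, minus wc*tau of delay.
phase_margin_deg = 180.0 + np.degrees(np.angle(Y))
```

Raising the effective delay τ eats directly into the phase margin, which is why stability margins are among the control-law properties worth evaluating.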
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
NASA Astrophysics Data System (ADS)
Sandoz, J.-P.; Steenaart, W.
1984-12-01
The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.
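The random-walk treatment of a first-order digital loop can be illustrated with a toy simulation (all parameters hypothetical): the phase detector commands a fixed-size correction at each update, and noise occasionally reverses the sign of that decision.

```python
import random

def simulate_dpll(steps=20000, delta=0.01, p_correct=0.7, seed=1):
    """Random-walk model of a first-order DPLL: each update applies a
    fixed-size phase correction `delta`; noise reverses the phase
    detector's decision with probability 1 - p_correct."""
    rng = random.Random(seed)
    phi = 0.5                                  # initial phase error (cycles)
    history = []
    for _ in range(steps):
        direction = -1.0 if phi > 0 else 1.0   # ideal correction sense
        if rng.random() > p_correct:
            direction = -direction             # noisy decision
        phi += delta * direction
        history.append(phi)
    tail = history[steps // 2:]                # discard acquisition transient
    return (sum(x * x for x in tail) / len(tail)) ** 0.5

rms_phase_error = simulate_dpll()
```

Sweeping `delta` in this sketch reproduces the familiar trade: larger corrections shorten acquisition but raise the steady-state rms phase error.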
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and the search is carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.
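For orientation, the crudest ambiguity-fixing baseline is nearest-integer rounding of the float solution; LAMBDA improves on this by decorrelating the ambiguity covariance and searching an ellipsoidal candidate set, which is the search space the baseline constraints above shrink. A hypothetical sketch of the rounding baseline:

```python
import numpy as np

def round_ambiguities(a_float):
    """Nearest-integer ambiguity fixing: the naive baseline that LAMBDA's
    decorrelating ellipsoidal search improves upon."""
    return np.rint(a_float).astype(int)

a_float = np.array([3.1, -7.8, 12.4])   # hypothetical float ambiguities
a_fixed = round_ambiguities(a_float)
```

Rounding ignores the (usually strong) correlations between ambiguities, which is precisely why a decorrelated ellipsoidal search has a far higher success rate.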
Calculation Of Pneumatic Attenuation In Pressure Sensors
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.
1991-01-01
Errors caused by attenuation of air-pressure waves in narrow tubes calculated by method based on fundamental equations of flow. Changes in ambient pressure transmitted along narrow tube to sensor. Attenuation of high-frequency components of pressure wave calculated from wave equation derived from Navier-Stokes equations of viscous flow in tube. Developed to understand and compensate for frictional attenuation in narrow tubes used to connect aircraft pressure sensors with pressure taps on affected surfaces.
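The article derives attenuation from the wave equation for viscous flow; purely as an illustration of the effect, a first-order pneumatic lag 1/(τs + 1) with a hypothetical time constant already shows the high-frequency components of a pressure wave being attenuated far more than the low-frequency ones:

```python
import math

def lag_attenuation(f_hz, tau_s):
    """Magnitude of the first-order lag 1/(tau*s + 1) at frequency f:
    a crude stand-in for the full viscous-tube model."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_hz * tau_s) ** 2)

tau = 0.05                          # assumed tube time constant (s)
low = lag_attenuation(1.0, tau)     # ~0.95: 1 Hz passes almost unattenuated
high = lag_attenuation(50.0, tau)   # ~0.06: 50 Hz is nearly wiped out
```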
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect.
The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems.
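A minimal sketch of the regularization half of the approach (not the authors' semi-parametric model): Tikhonov-regularized least squares applied to a toy smoothing operator standing in for the downward-continuation problem, with all values synthetic:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2: the standard remedy for
    ill-posed systems such as downward continuation."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
# Hypothetical smoothing kernel: the surface field as blurred at altitude.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)
truth = np.sin(4.0 * np.pi * x)
b = A @ truth + 0.01 * rng.standard_normal(n)   # noisy "airborne" data
est = tikhonov_solve(A, b, lam=1e-3)
```

The damping parameter `lam` trades fit against noise amplification: too small and the random errors blow up on continuation, too large and real signal is smoothed away.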
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-01-01
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect.
The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems. PMID:28587086
NASA Astrophysics Data System (ADS)
Zhao, Q.
2017-12-01
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect.
The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems.
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-11-18
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will degrade the signal tracking ability of the tracking loops and the accuracy of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation in order to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced into the non-coherent pre-filter design, extending its effective working range for carrier phase error estimation from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter.
The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
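The ATAN2 discriminator mentioned above can be sketched in a few lines; the four-quadrant form is what gives the (-0.5, 0.5)-cycle unambiguous range, versus (-0.25, 0.25) for a two-quadrant atan(Q/I):

```python
import math

def atan2_discriminator(i_sum, q_sum):
    """Four-quadrant arctangent carrier phase discriminator: returns the
    phase error estimate in cycles from the prompt I/Q correlator sums."""
    return math.atan2(q_sum, i_sum) / (2.0 * math.pi)

# A true phase error of 0.3 cycle is recovered correctly ...
phi = 0.3
I = math.cos(2.0 * math.pi * phi)
Q = math.sin(2.0 * math.pi * phi)
est = atan2_discriminator(I, Q)
# ... while the two-quadrant atan(Q/I) aliases it to 0.3 - 0.5 = -0.2 cycle.
aliased = math.atan(Q / I) / (2.0 * math.pi)
```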
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-01-01
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will degrade the signal tracking ability of the tracking loops and the accuracy of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation in order to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced into the non-coherent pre-filter design, extending its effective working range for carrier phase error estimation from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter.
The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz. PMID:29156581
Correction of Single Frequency Altimeter Measurements for Ionosphere Delay
NASA Technical Reports Server (NTRS)
Schreiner, William S.; Markin, Robert E.; Born, George H.
1997-01-01
This study is a preliminary analysis of the accuracy of various ionosphere models used to correct single-frequency altimeter height measurements for ionospheric path delay. In particular, research focused on adjusting the empirical and parameterized ionosphere models in the parameterized real-time ionospheric specification model (PRISM) 1.2 using total electron content (TEC) data from the Global Positioning System (GPS). The types of GPS data used to adjust PRISM included GPS line-of-sight (LOS) TEC data mapped to the vertical, and a grid of GPS-derived TEC data in a sun-fixed longitude frame. The adjusted PRISM TEC values, as well as predictions by IRI-90, a climatological model, were compared to TOPEX/Poseidon (T/P) TEC measurements from the dual-frequency altimeter for a number of T/P tracks. When adjusted with GPS LOS data, the PRISM empirical model predicted TEC over 24 1-h data sets for a given local time to within a global error of 8.60 TECU rms for a midnight-centered ionosphere and 9.74 TECU rms for a noon-centered ionosphere. Using GPS-derived sun-fixed TEC data, the PRISM parameterized model predicted TEC within an error of 8.47 TECU rms centered at midnight and 12.83 TECU rms centered at noon. From these best results, it is clear that the proposed requirement of 3-4 TECU global rms for TOPEX/Poseidon Follow-On will be very difficult to meet, even with a substantial increase in the number of GPS ground stations, with any realizable combination of the aforementioned models or data assimilation schemes.
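The TECU error figures translate directly into altimeter height error through the first-order ionospheric delay formula d = 40.3·TEC/f². A quick check (the frequency value is assumed here to be the TOPEX Ku-band carrier near 13.6 GHz):

```python
def iono_delay_m(tec_tecu, freq_hz):
    """First-order ionospheric range delay in meters:
    d = 40.3 * TEC / f^2, TEC in el/m^2 (1 TECU = 1e16 el/m^2)."""
    return 40.3 * tec_tecu * 1e16 / freq_hz ** 2

# An 8.6 TECU rms model error at ~13.6 GHz maps to roughly 1.9 cm of
# height error, which puts the 3-4 TECU requirement in perspective.
err_m = iono_delay_m(8.6, 13.6e9)
```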
NASA Technical Reports Server (NTRS)
Desai, S. D.; Yuan, D. -N.
2006-01-01
A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
Improving the surface metrology accuracy of optical profilers by using multiple measurements
NASA Astrophysics Data System (ADS)
Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan
2016-10-01
The performance of high-resolution optical systems is affected by small-angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable in their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The white-noise intensity decreases as the number of repeated measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with analysis of both the residual reference error after calibration and the random errors appearing in the range of measured spatial frequencies. The resulting insights into the effects of white noise in optical profiler measurements, and the methods to mitigate them, may prove invaluable for improving the quality of surface metrology with optical profilers.
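The averaging step rests on a standard fact: the variance of additive white noise drops as 1/M when M independent measurements are averaged, while the repeatable surface signal is untouched. A synthetic sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 4096, 16
surface = np.cumsum(rng.standard_normal(n)) * 0.01  # stand-in surface profile
sigma = 1.0                                         # white-noise level
meas = surface + sigma * rng.standard_normal((m, n))  # m repeat measurements
avg = meas.mean(axis=0)
var_single = np.var(meas[0] - surface)  # ~sigma^2
var_avg = np.var(avg - surface)         # ~sigma^2 / m
```

On a log-log PSD plot the same effect shows up as the white-noise floor, the source of the heavy tail, dropping while the fractal surface spectrum stays put.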
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
Verbist, Bie; Clement, Lieven; Reumers, Joke; Thys, Kim; Vapirev, Alexander; Talloen, Willem; Wetzels, Yves; Meys, Joris; Aerssens, Jeroen; Bijnens, Luc; Thas, Olivier
2015-02-22
Deep-sequencing allows for an in-depth characterization of sequence variation in complex populations. However, technology associated errors may impede a powerful assessment of low-frequency mutations. Fortunately, base calls are complemented with quality scores which are derived from a quadruplet of intensities, one channel for each nucleotide type for Illumina sequencing. The highest intensity of the four channels determines the base that is called. Mismatch bases can often be corrected by the second best base, i.e. the base with the second highest intensity in the quadruplet. A virus variant model-based clustering method, ViVaMBC, is presented that explores quality scores and second best base calls for identifying and quantifying viral variants. ViVaMBC is optimized to call variants at the codon level (nucleotide triplets) which enables immediate biological interpretation of the variants with respect to their antiviral drug responses. Using mixtures of HCV plasmids we show that our method accurately estimates frequencies down to 0.5%. The estimates are unbiased when average coverages of 25,000 are reached. A comparison with the SNP-callers V-Phaser2, ShoRAH, and LoFreq shows that ViVaMBC has a superb sensitivity and specificity for variants with frequencies above 0.4%. Unlike the competitors, ViVaMBC reports a higher number of false-positive findings with frequencies below 0.4% which might partially originate from picking up artificial variants introduced by errors in the sample and library preparation step. ViVaMBC is the first method to call viral variants directly at the codon level. The strength of the approach lies in modeling the error probabilities based on the quality scores. Although the use of second best base calls appeared very promising in our data exploration phase, their utility was limited. They provided a slight increase in sensitivity, which however does not warrant the additional computational cost of running the offline base caller. 
Apparently a lot of information is already contained in the quality scores enabling the model based clustering procedure to adjust the majority of the sequencing errors. Overall the sensitivity of ViVaMBC is such that technical constraints like PCR errors start to form the bottleneck for low frequency variant detection.
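The error-probability modeling rests on the standard Phred convention, p = 10^(-Q/10): each base call carries its own error probability, which is what a model-based clustering procedure can exploit.

```python
def phred_to_error_prob(q):
    """Phred quality score Q -> base-call error probability p = 10^(-Q/10)."""
    return 10.0 ** (-q / 10.0)

# Q20 -> 1% error, Q30 -> 0.1% error: at a 0.4-0.5% variant frequency the
# signal is the same order as Q20-level noise, hence the need to model it.
p20 = phred_to_error_prob(20)
p30 = phred_to_error_prob(30)
```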
An improved design method based on polyphase components for digital FIR filters
NASA Astrophysics Data System (ADS)
Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No
2017-11-01
This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven SOTs, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The performance of the proposed method is evaluated using several important filter attributes, and the comparative study demonstrates its effectiveness for FIR filter design.
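The polyphase decomposition underlying the formulation splits a prototype impulse response h[n] into m subsequences e_k[n] = h[nm + k]; a minimal sketch with a toy filter:

```python
import numpy as np

def polyphase_components(h, m):
    """Split an FIR impulse response into its m polyphase components,
    e_k[n] = h[n*m + k], zero-padding h to a multiple of m."""
    h = np.concatenate([h, np.zeros((-len(h)) % m)])
    return [h[k::m] for k in range(m)]

h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # toy prototype filter
e0, e1 = polyphase_components(h, 2)           # [1, 3, 5] and [2, 4, 6]
# Interleaving the components recovers the original impulse response.
recon = np.zeros(len(h))
recon[0::2], recon[1::2] = e0, e1
```

Optimising over the shorter polyphase components rather than the full-length h is what keeps the mean-square-error formulation compact.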
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Song (E-mail: yuessd@163.com; University of Chinese Academy of Sciences, Beijing 100049); Zhang, Zhao-chuan
In this paper, a sector steps approximation method is proposed to investigate the resonant frequencies of magnetrons with arbitrary side resonators. The arbitrary side resonator is substituted with a series of sector steps, in which the spatial harmonics of the electromagnetic field are also considered. By using the method of admittance matching between adjacent steps, as well as field continuity conditions between the side resonators and the interaction region, the dispersion equation of a magnetron with arbitrary side resonators is derived. Resonant frequencies of magnetrons with five common kinds of side resonators are calculated with the sector steps approximation method and with computer simulation software, and the results are in good agreement. The relative error is less than 2%, which verifies the validity of the sector steps approximation method.
Moment-Tensor Spectra of Source Physics Experiments (SPE) Explosions in Granite
NASA Astrophysics Data System (ADS)
Yang, X.; Cleveland, M.
2016-12-01
We perform frequency-domain moment tensor inversions of Source Physics Experiments (SPE) explosions conducted in granite during Phase I of the experiment. We test the sensitivity of source moment-tensor spectra to factors such as the velocity model, selected dataset and smoothing and damping parameters used in the inversion to constrain the error bound of inverted source spectra. Using source moments and corner frequencies measured from inverted source spectra of these explosions, we develop a new explosion P-wave source model that better describes observed source spectra of these small and over-buried chemical explosions detonated in granite than classical explosion source models derived mainly from nuclear-explosion data. In addition to source moment and corner frequency, we analyze other features in the source spectra to investigate their physical causes.
snpAD: An ancient DNA genotype caller.
Prüfer, Kay
2018-06-21
The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.
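The core iteration — jointly estimating genotype frequencies and an error rate from read counts — can be sketched as a small EM loop. This is a toy, hedged illustration (snpAD additionally models position-dependent damage along sequences); all names and parameters here are illustrative, not the tool's API:

```python
import numpy as np
from scipy.stats import binom

def em_genotypes(alt, tot, n_iter=60):
    """Toy iterative scheme: jointly estimate genotype frequencies
    p = (RR, RA, AA) and a per-base error rate e from alt-allele
    read counts `alt` out of `tot` reads at each site."""
    alt, tot = np.asarray(alt), np.asarray(tot)
    p, e = np.full(3, 1.0 / 3.0), 0.01
    for _ in range(n_iter):
        q = np.array([e, 0.5, 1.0 - e])            # P(alt read | genotype)
        lik = binom.pmf(alt[:, None], tot[:, None], q[None, :])
        post = lik * p
        post /= post.sum(axis=1, keepdims=True)     # E-step: genotype posteriors
        p = post.mean(axis=0)                       # M-step: genotype frequencies
        # M-step for e: errors are alt reads under RR and ref reads under AA
        mism = (post[:, 0] * alt + post[:, 2] * (tot - alt)).sum()
        e = mism / ((post[:, 0] + post[:, 2]) * tot).sum()
    return p, e
```

Each E-step computes per-site genotype posteriors under the current error rate; each M-step re-estimates the genotype frequencies and the error rate from those posteriors.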
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
NASA Technical Reports Server (NTRS)
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems
NASA Astrophysics Data System (ADS)
Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen
2017-06-01
In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems based on decode-and-forward relaying is presented. The Exponentiated Weibull fading channel with pointing-error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit-error rate (BER) are developed based on Meijer's G-function. The analytical results match the Monte-Carlo simulation results accurately. The outage and BER performance of the mixed system with decode-and-forward relaying are investigated considering atmospheric turbulence and pointing-error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.
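Since the analytical CDF involves Meijer's G-function, a Monte-Carlo check of the decode-and-forward outage probability is a useful companion. The sketch below assumes per-hop SNR proportional to the fading gain (heterodyne detection on the FSO hop) and uses illustrative parameter names not taken from the paper:

```python
import numpy as np

def ew_rvs(alpha, beta, eta, size, rng):
    """Inverse-CDF samples from the Exponentiated Weibull distribution,
    CDF F(x) = [1 - exp(-(x/eta)**beta)]**alpha."""
    u = rng.random(size)
    return eta * (-np.log1p(-u**(1.0 / alpha)))**(1.0 / beta)

def df_outage(gamma_th, m, gbar_rf, alpha, beta, gbar_fso,
              n=200_000, seed=0):
    """Monte-Carlo outage probability of a dual-hop decode-and-forward
    FSO/RF link: the end-to-end SNR is min(SNR_fso, SNR_rf)."""
    rng = np.random.default_rng(seed)
    # RF hop: Nakagami-m fading -> power gain ~ Gamma(m, scale=gbar/m)
    snr_rf = rng.gamma(m, gbar_rf / m, n)
    # FSO hop: SNR proportional to irradiance; normalised to mean gbar_fso
    h = ew_rvs(alpha, beta, 1.0, n, rng)
    snr_fso = gbar_fso * h / h.mean()
    return float(np.mean(np.minimum(snr_rf, snr_fso) < gamma_th))
```

The decode-and-forward assumption makes the end-to-end SNR the minimum of the two hop SNRs, so outage occurs whenever either hop drops below the threshold.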
Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions
NASA Astrophysics Data System (ADS)
Van Hooidonk, R. J.
2011-12-01
Future widespread coral bleaching and subsequent mortality have been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the GCMs' 20th-century simulations included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology identifies frequency bands that are important to predicting coral bleaching and highlights deficiencies in these bands in models. The methodology we describe can be used to improve future climate-model-derived predictions of coral reef bleaching and to better characterize the errors and uncertainty in predictions.
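The band-replacement step and the skill metric can be sketched as follows; the function names, the FFT-based band swap, and the contingency-table PSS are illustrative reconstructions of the described procedure, not the study's code:

```python
import numpy as np

def replace_band(obs, model, f_lo, f_hi, dt=1.0):
    """Swap the [f_lo, f_hi] frequency band of `obs` (e.g. observed SST)
    with the same band from `model` (e.g. GCM SST)."""
    F_obs, F_mod = np.fft.rfft(obs), np.fft.rfft(model)
    f = np.fft.rfftfreq(len(obs), d=dt)
    band = (f >= f_lo) & (f <= f_hi)
    F_obs[band] = F_mod[band]
    return np.fft.irfft(F_obs, n=len(obs))

def peirce_skill_score(forecast, observed):
    """PSS = hit rate - false alarm rate for boolean event series
    (assumes both events and non-events occur in `observed`)."""
    f, o = np.asarray(forecast, bool), np.asarray(observed, bool)
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_neg = np.sum(~f & ~o)
    return hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)
```

A PSS of 1 indicates a perfect forecast, 0 no skill, and negative values a forecast worse than random.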
Wideband characterization of printed circuit board materials up to 50 ghz
NASA Astrophysics Data System (ADS)
Rakov, Aleksei
A traveling-wave technique developed a few years ago in the Missouri S&T EMC Laboratory has been employed until now for characterization of PCB materials over a broad frequency range up to 30 GHz. This technique includes measuring S-parameters of specially designed PCB test vehicles. Extending the frequency range of printed circuit board laminate dielectric and copper foil characterization is an important problem. In this work, a new PCB test vehicle design for operating up to 50 GHz has been proposed. As the frequency range of measurements increases, the analysis of errors and uncertainties in measuring dielectric properties becomes increasingly important. Formulas for quantification of two major groups of errors, repeatability (manufacturing variability) and reproducibility (systematic) errors, in extracting dielectric constant (DK) and dissipation factor (DF) have been derived, and computations for a number of cases are presented. Conductor (copper foil) surface roughness of PCB interconnects is an important factor that affects the accuracy of DK and DF measurements. This work describes a new algorithm for semi-automatic characterization of copper foil profiles on optical or scanning electron microscopy (SEM) pictures of signal traces. The collected statistics of numerous copper foil roughness profiles allows for introducing a new metric for roughness characterization of PCB interconnects. This is an important step toward refining the measured DK and DF parameters by removing roughness contributions. The collected foil profile data and its analysis allow for developing "design curves", which could be used by SI engineers and electronics developers in their designs.
NASA Astrophysics Data System (ADS)
Liu, Yu; He, Chuanbo
2015-12-01
This discussion presents corrections to errors found in the derivations and numerical code of a recent analytical study (Zhou et al., Journal of Sound and Vibration 333 (7) (2014) 1972-1990) on sound transmission through double-walled cylindrical shells lined with poroelastic material, and also examines the further effect of external mean flow on the transmission loss (TL). After applying the corrections, the locations of the characteristic frequencies of thin shells remain unchanged, as do the TL results above the ring frequency, where the BU and UU configurations remain the best in sound insulation performance. In the low-frequency region below the ring frequency, however, the corrections attenuate the TL amplitude significantly for BU and UU, and hence the BB configuration exhibits the best performance, which is consistent with previous observations for flat sandwich panels.
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
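The underlying comparison of prediction errors can be illustrated with a plain (non-frequency-selective) Granger Causality Index computed from least-squares AR fits; the frequency-selective variant described above would first cancel specific signal components from the series before fitting. Names and the model order are illustrative:

```python
import numpy as np

def ar_design(data, p):
    """Stack lagged regressors; data has shape (T, channels)."""
    T = data.shape[0]
    return np.hstack([data[p - k:T - k] for k in range(1, p + 1)])

def granger_index(x, y, p=5):
    """Granger Causality Index x -> y: ln(var_restricted / var_full),
    comparing prediction errors of AR models with and without x's past."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    target = y[p:]
    Xr = ar_design(y[:, None], p)                       # y's own past only
    Xf = ar_design(np.column_stack([y, x]), p)          # past of y and x
    br, *_ = np.linalg.lstsq(Xr, target, rcond=None)
    bf, *_ = np.linalg.lstsq(Xf, target, rcond=None)
    vr = np.var(target - Xr @ br)
    vf = np.var(target - Xf @ bf)
    return float(np.log(vr / vf))
```

A positive index means that including the other channel's past reduces the prediction error variance, i.e. a directed influence in Granger's sense.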
Measurements of the cosmic microwave background temperature at 1.47 GHz
NASA Technical Reports Server (NTRS)
Bensadoun, M.; Bersanelli, M.; De Amici, G.; Kogut, A.; Levin, S. M.; Limon, M.; Smoot, G. F.; Witebsky, C.
1993-01-01
We have used a radio-frequency-gain total-power radiometer to measure the intensity of the cosmic microwave background (CMB) at a frequency of 1.47 GHz (20.4 cm wavelength) from White Mountain, California in 1988 September and from the South Pole in 1989 December. The CMB thermodynamic temperature, T(CMB), is 2.27 +/- 0.25 K (68 percent confidence limit) measured from White Mountain and 2.26 +/- 0.20 K from the South Pole site. The combined result is 2.26 +/- 0.19 K. The correction for Galactic emission has been derived from scaled low-frequency maps and constitutes the main source of error. The atmospheric signal is extrapolated from our zenith scan measurements at higher frequencies. These results are consistent with our previous measurement at 1.41 GHz and about 2.5 sigma from the 2.74 +/- 0.01 K global average CMB temperature.
Satellite Estimation of Daily Land Surface Water Vapor Pressure Deficit from AMSR- E
NASA Astrophysics Data System (ADS)
Jones, L. A.; Kimball, J. S.; McDonald, K. C.; Chan, S. K.; Njoku, E. G.; Oechel, W. C.
2007-12-01
Vapor pressure deficit (VPD) is a key variable for monitoring land surface water and energy exchanges, and estimating plant water stress. Multi-frequency day/night brightness temperatures from the Advanced Microwave Scanning Radiometer on EOS Aqua (AMSR-E) were used to estimate daily minimum and average near surface (2 m) air temperatures across a North American boreal-Arctic transect. A simple method for determining daily mean VPD (Pa) from AMSR-E air temperature retrievals was developed and validated against observations across a regional network of eight study sites ranging from boreal grassland and forest to arctic tundra. The method assumes that the dew point and minimum daily air temperatures tend to equilibrate in areas with low night time temperatures and relatively moist conditions. This assumption was tested by comparing the VPD algorithm results derived from site daily temperature observations against results derived from AMSR-E retrieved temperatures alone. An error analysis was conducted to determine the amount of error introduced in VPD estimates given known levels of error in satellite retrieved temperatures. Results indicate that the assumption generally holds for the high latitude study sites except for arid locations in mid-summer. VPD estimates using the method with AMSR-E retrieved temperatures compare favorably with site observations. The method can be applied to land surface temperature retrievals from any sensor with day and night surface or near-surface thermal measurements and shows potential for inferring near-surface wetness conditions where dense vegetation may hinder surface soil moisture retrievals from low-frequency microwave sensors. This work was carried out at The University of Montana, at San Diego State University, and at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
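The VPD calculation described above — dew point approximated by the daily minimum temperature — reduces to a difference of saturation vapour pressures. A minimal sketch using the Magnus formula (these particular Magnus coefficients are a common choice, not necessarily those used in the study):

```python
import numpy as np

def e_sat(t_c):
    """Saturation vapour pressure (Pa) over water, Magnus formula."""
    return 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))

def daily_vpd(t_avg_c, t_min_c):
    """Daily mean VPD (Pa), assuming dew point ~= daily minimum air
    temperature, so actual vapour pressure e_a ~= e_sat(t_min)."""
    return e_sat(t_avg_c) - e_sat(t_min_c)
```

As the abstract notes, the dew-point assumption breaks down in arid conditions, where the true dew point falls below the nightly minimum temperature and this estimate overstates the actual vapour pressure.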
Covariance analyses of satellite-derived mesoscale wind fields
NASA Technical Reports Server (NTRS)
Maddox, R. A.; Vonder Haar, T. H.
1979-01-01
Statistical structure functions have been computed independently for nine satellite-derived mesoscale wind fields that were obtained on two different days. Small cumulus clouds were tracked at 5 min intervals, but since these clouds occurred primarily in the warm sectors of midlatitude cyclones the results cannot be considered representative of the circulations within cyclones in general. The field structure varied considerably with time and was especially affected if mesoscale features were observed. The wind fields on the 2 days studied were highly anisotropic with large gradients in structure occurring approximately normal to the mean flow. Structure function calculations for the combined set of satellite winds were used to estimate random error present in the fields. It is concluded for these data that the random error in vector winds derived from cumulus cloud tracking using high-frequency satellite data is less than 1.75 m/s. Spatial correlation functions were also computed for the nine data sets. Normalized correlation functions were considerably different for u and v components and decreased rapidly as data point separation increased for both components. The correlation functions for transverse and longitudinal components decreased less rapidly as data point separation increased.
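The second-order structure function used here, D(r) = ⟨(u(x+r) − u(x))²⟩ binned by pair separation, can be sketched as follows (a brute-force O(N²) version suitable for the few hundred vectors in a mesoscale wind field; names are illustrative):

```python
import numpy as np

def structure_function(pos, u, bins):
    """Second-order structure function D(r) = <(u(x+r) - u(x))^2>,
    binned by pair separation. pos: (N, 2) positions, u: (N,) wind
    component, bins: monotonically increasing bin edges."""
    d_pos = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((d_pos ** 2).sum(-1))
    du2 = (u[:, None] - u[None, :]) ** 2
    iu = np.triu_indices(len(u), k=1)          # each pair counted once
    r, du2 = r[iu], du2[iu]
    idx = np.digitize(r, bins)
    return np.array([du2[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])
```

The asymptotic value of D(r) at small separations gives an estimate of twice the random error variance in the derived winds, which is how the 1.75 m/s bound quoted above is obtained.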
Digital Filtering of Three-Dimensional Lower Extremity Kinematics: an Assessment
Sinclair, Jonathan; Taylor, Paul John; Hobbs, Sarah Jane
2013-01-01
Errors in kinematic data are referred to as noise and are an undesirable portion of any waveform. Noise is typically removed using a low-pass filter, which removes the high-frequency components of the signal. The selection of an optimal frequency cut-off is very important when processing kinematic information, and a number of techniques exist for the determination of an optimal frequency cut-off. Despite the importance of the cut-off frequency to the efficacy of kinematic analyses, there is currently a paucity of research examining the influence of different cut-off frequencies on the resultant 3-D kinematic waveforms and discrete parameters. Twenty participants ran at 4.0 m•s−1 as lower extremity kinematics in the sagittal, coronal and transverse planes were measured using an eight-camera motion analysis system. The data were filtered at a range of cut-off frequencies and the discrete kinematic parameters were examined using repeated measures ANOVAs. The similarity between the raw and filtered waveforms was examined using intra-class correlations. The results show that the cut-off frequency has a significant influence on the discrete kinematic measures across displacement and derivative information in all three planes of rotation. Furthermore, it was also revealed that as the cut-off frequency decreased, the attenuation of the kinematic waveforms became more pronounced, particularly in the coronal and transverse planes at the second derivative. In conclusion, this investigation provides new information regarding the influence of digital filtering on lower extremity kinematics and re-emphasizes the importance of selecting the correct cut-off frequency. PMID:24511338
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation without introducing any re-sampling errors, while also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce any aliasing and interpolation errors as the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods do. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuities. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as "nearest," "linear," "cubic," and "spline" fitting in Matlab.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
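The low-spatial-frequency part of this re-sampling can be illustrated with an analytic fit evaluated directly on the new grid. The sketch below substitutes ordinary 2-D monomials for the Zernike basis and omits the PSD-based mid/high-frequency step; it shows why evaluating an analytic model, rather than interpolating pixels, introduces no aliasing:

```python
import numpy as np

def resample_lowfreq(zmap, new_shape, order=4):
    """Resample the low-spatial-frequency part of a surface map by
    fitting a 2-D polynomial (a stand-in for the Zernike basis) on a
    normalized [-1, 1] grid and evaluating it analytically on the new
    grid. The PSD-based mid/high-frequency reconstruction is omitted."""
    ny, nx = zmap.shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    exps = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([(x**i * y**j).ravel() for i, j in exps])
    coef, *_ = np.linalg.lstsq(A, zmap.ravel(), rcond=None)
    ny2, nx2 = new_shape
    y2, x2 = np.mgrid[-1:1:ny2 * 1j, -1:1:nx2 * 1j]
    A2 = np.column_stack([(x2**i * y2**j).ravel() for i, j in exps])
    return (A2 @ coef).reshape(new_shape)
```

Because the output is computed from fitted coefficients rather than from neighboring pixels, the result is identical regardless of the output grid density, unlike "nearest"/"linear"/"cubic"/"spline" interpolation.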
Effect of phase errors in stepped-frequency radar systems
NASA Astrophysics Data System (ADS)
Vanbrundt, H. E.
1988-04-01
Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used for an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and in signal-to-phase-noise ratio.
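The peak-response loss from random frequency error can be reproduced with a small simulation of a stepped-frequency return from a point target; all parameters are illustrative, not taken from the report:

```python
import numpy as np

def range_profile(R, f0=9e9, df=1e6, n_steps=256, sigma_f=0.0, seed=1):
    """Compressed range profile of a point target at range R (m) from a
    stepped-frequency waveform; sigma_f is the rms random error (Hz)
    on each frequency step."""
    c = 3.0e8
    rng = np.random.default_rng(seed)
    f = f0 + np.arange(n_steps) * df + rng.normal(0.0, sigma_f, n_steps)
    echo = np.exp(-1j * 4.0 * np.pi * f * R / c)   # two-way phase per step
    return np.abs(np.fft.ifft(echo))                # pulse compression via IFFT
```

The per-step phase error is 4π·δf·R/c, so for a fixed frequency error the peak loss grows with target range — the range dependence of the SNR degradation noted above, and the reason calibration against a reference reflector at a different range introduces error.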
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, Daniel C.; Bowman, Judd; Parsons, Aaron R.
We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from –46° to –40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of –0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.
Dynamic gas temperature measurement system
NASA Technical Reports Server (NTRS)
Elmore, D. L.; Robinson, W. W.; Watkins, W. B.
1983-01-01
A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected by neglecting conduction. A compensation method was developed to account for the effects of conduction and convection. This method was verified in analog electrical simulations and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.
NASA Astrophysics Data System (ADS)
Klein, P.; Hirth, M.; Gröber, S.; Kuhn, J.; Müller, A.
2014-07-01
Smartphones and tablets are used as experimental tools and for quantitative measurements in two traditional laboratory experiments for undergraduate physics courses. The Doppler effect is analyzed and the speed of sound is determined with an accuracy of about 5% using ultrasonic frequency and two smartphones, which serve as rotating sound emitter and stationary sound detector. Emphasis is put on the investigation of measurement errors in order to judge experimentally derived results and to sensitize undergraduate students to the methods of error estimates. The distance dependence of the illuminance of a light bulb is investigated using an ambient light sensor of a mobile device. Satisfactory results indicate that the spectrum of possible smartphone experiments goes well beyond those already published for mechanics.
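For the rotating-emitter Doppler experiment, the speed of sound follows from the extreme detected frequencies without knowing the emitted frequency f0: with f_max = f0·c/(c − v) on approach and f_min = f0·c/(c + v) on recession, dividing the sum by the difference cancels f0 and gives c = v·(f_max + f_min)/(f_max − f_min). A one-line sketch (names illustrative):

```python
def speed_of_sound(f_max, f_min, v):
    """Speed of sound from the Doppler-shifted extremes f_max (approach)
    and f_min (recession) of a source circling at tangential speed v:
    c = v * (f_max + f_min) / (f_max - f_min)."""
    return v * (f_max + f_min) / (f_max - f_min)
```

Because the ratio (f_max + f_min)/(f_max − f_min) is large when v ≪ c, a small relative error in the measured frequency difference produces a much larger relative error in c, which is why the error analysis emphasized in the exercise matters.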
Planck 2015 results. X. Diffuse component separation: Foreground maps
NASA Astrophysics Data System (ADS)
Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Orlando, E.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz, and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1°. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4 μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points.
For polarization, the main outstanding issues are instrumental systematics in the 100-353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adam, R.; Ade, P. A. R.; Aghanim, N.
We report that Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz, and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1 degree. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4 μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. 
For polarization, the main outstanding issues are instrumental systematics in the 100–353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.
Planck 2015 results: X. Diffuse component separation: Foreground maps
Adam, R.; Ade, P. A. R.; Aghanim, N.; ...
2016-09-20
We report that Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz in polarization. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps andmore » the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 and 1deg. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. 
For polarization, the main outstanding issues are instrumental systematics in the 100–353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.
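The joint multi-frequency fit described above can be caricatured as a per-pixel linear mixture inversion. A minimal sketch, assuming a two-component model (CMB plus a dust-like power law); the frequencies, spectral index, and amplitudes below are illustrative, not the actual Planck pipeline model:

```python
import numpy as np

# Per-pixel linear component separation sketch: observations at several
# frequencies are modeled as CMB (unit response in thermodynamic units)
# plus a dust-like power law. All numbers are illustrative.
freqs = np.array([100.0, 143.0, 217.0, 353.0])   # GHz
beta_dust = 1.6                                   # assumed dust spectral index

# Mixing matrix: column 0 = CMB, column 1 = dust scaling relative to 353 GHz
A = np.column_stack([np.ones_like(freqs), (freqs / 353.0) ** beta_dust])

true_amp = np.array([50.0, 20.0])                 # "true" CMB and dust amplitudes
rng = np.random.default_rng(0)
observed = A @ true_amp + rng.normal(0.0, 0.1, size=freqs.size)

# Least-squares estimate of the per-pixel component amplitudes
est, *_ = np.linalg.lstsq(A, observed, rcond=None)
print(est)  # close to [50, 20]
```

In the actual analysis the spectral parameters themselves are sampled within the Bayesian framework rather than held fixed as here.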
Headaches associated with refractive errors: myth or reality?
Gil-Gouveia, R; Martins, I P
2002-04-01
Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache include an entity of headache associated with refractive errors (HARE), but indicate that its importance is widely overestimated. Our aim was to compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and in a control group. We interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or no refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to habits of visual effort and type of refractive error. Headache frequency was similar in subjects and controls. HARE was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P = .02). HARE was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints, primarily by decreasing the frequency of headache episodes.
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order terms in the refractive index formula and errors due to bending of the signal, and the total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimation and correction of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas are proposed to correct errors due to the excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy using the proposed correction formulas.
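For context, the first-order term that the standard dual-frequency method does remove can be sketched as follows. The 40.3 constant and the ionosphere-free combination are the textbook first-order expressions; the range and TEC values are illustrative, and the higher-order terms treated in the paper are deliberately neglected:

```python
# First-order ionospheric group delay (metres) and the standard dual-frequency
# ionosphere-free combination, which removes exactly that first-order term.
# Higher-order and bending terms, the subject of the paper, are ignored here.
F_L1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F_L2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def iono_delay(tec, f):
    """First-order group delay in metres for TEC in electrons/m^2."""
    return 40.3 * tec / f**2

def iono_free(p1, p2, f1=F_L1, f2=F_L2):
    """Ionosphere-free pseudorange combination."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

rho = 22_000_000.0   # geometric range (m), illustrative
tec = 30e16          # 30 TECU, illustrative
p1 = rho + iono_delay(tec, F_L1)
p2 = rho + iono_delay(tec, F_L2)
print(iono_free(p1, p2) - rho)  # ~0: first-order error removed
```

The residual (second- and third-order) range errors that survive this combination are what the paper's approximation formulas address.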
Direct measurement of the poliovirus RNA polymerase error frequency in vitro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.
1988-02-01
The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate observed under the same conditions in a previous study.
ERIC Educational Resources Information Center
Hallin, Anna Eva; Reuterskiöld, Christina
2017-01-01
Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…
Automatic oscillator frequency control system
NASA Technical Reports Server (NTRS)
Smith, S. F. (Inventor)
1985-01-01
A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
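A minimal software sketch of the accumulator-plus-phase-adder architecture described above (essentially a numerically controlled oscillator). The register width and index values are illustrative, not those of the patented design:

```python
import math

# Sketch of an accumulator-based oscillator: a frequency index advances a
# modulo accumulator, and a phase adder offsets the lookup into a sine table
# (the "phase plane ROM"). All sizes and indices are illustrative.
ACC_BITS = 16
MOD = 1 << ACC_BITS

def nco(freq_index, phase_index, n_samples):
    """Generate samples; changing freq_index gives FSK, phase_index gives PSK."""
    acc = 0
    out = []
    for _ in range(n_samples):
        acc = (acc + freq_index) % MOD        # modulo-sum accumulator
        rom_addr = (acc + phase_index) % MOD  # phase adder before the ROM
        out.append(math.sin(2 * math.pi * rom_addr / MOD))
    return out

# A half-modulus phase index is a 180-degree step: the waveform inverts (BPSK).
a = nco(1024, 0, 8)
b = nco(1024, MOD // 2, 8)
print(all(abs(x + y) < 1e-9 for x, y in zip(a, b)))  # True
```

Periodically changing freq_index shifts the output frequency (frequency shift keying), while stepping phase_index through the phase adder shifts the ROM lookup (phase shift keying), as in the BPSK example above.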
Radar error statistics for the space shuttle
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Radar error statistics for the C-band and S-band radars recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, denoted by the subscript B, and high-frequency error statistics, denoted by the subscript q. Bias errors may be slowly varying to constant. High-frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in the correction for atmospheric refraction effects. High-frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.
Factors associated with reporting of medication errors by Israeli nurses.
Kagan, Ilya; Barnoy, Sivia
2008-01-01
This study investigated medication error reporting among Israeli nurses, the relationship between nurses' personal views on error reporting and actual reporting, and the impact of the safety culture of the ward and hospital on this reporting. Nurses (n = 201) completed a questionnaire covering different aspects of error reporting (frequency, organizational norms of dealing with errors, and personal views on reporting). The higher the error frequency, the more errors went unreported. If the ward nurse manager corrected errors on the ward, error self-reporting decreased significantly. Ward nurse managers therefore have to provide good role models.
Analysis of frequency mixing error on heterodyne interferometric ellipsometry
NASA Astrophysics Data System (ADS)
Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan
2007-11-01
A heterodyne interferometric ellipsometer with no moving parts, based on a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency, common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation resulting mainly from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS) and by the elliptical polarization and non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality contributes no phase-difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have the major effect on the error.
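A toy numerical model of how a small polarization leakage perturbs the measured beat phase. The leakage amplitude, wavelength, and phase-to-length conversion are assumptions for illustration; the paper's full Jones matrix treatment is not reproduced here:

```python
import numpy as np

# Toy model of frequency mixing: the beat-note phasor ideally equals
# exp(i*phi), but an imperfect PBS leaks a small parasitic phasor of
# relative amplitude eps into the same detector channel.
eps = 0.01             # assumed polarization leakage (finite extinction ratio)
wavelength = 632.8e-9  # illustrative He-Ne wavelength (m)

phi = np.linspace(0, 2 * np.pi, 1000)
measured = np.angle(np.exp(1j * phi) + eps)          # leaked phasor at phase 0
phase_error = np.angle(np.exp(1j * (measured - phi)))  # wrapped difference

# First-order nonlinearity ~ -eps*sin(phi): a few mrad of phase, which maps
# to nanometre-scale length errors via lambda/(4*pi).
length_error = phase_error * wavelength / (4 * np.pi)
print(np.max(np.abs(length_error)))  # ~ eps * lambda / (4*pi), about 0.5 nm
```

The periodic, first-order character of this error is what makes it hard to average away in interferometric thickness measurements.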
Modeling and evaluating the performance of Brillouin distributed optical fiber sensors.
Soto, Marcelo A; Thévenaz, Luc
2013-12-16
A thorough analysis of the key factors impacting on the performance of Brillouin distributed optical fiber sensors is presented. An analytical expression is derived to estimate the error on the determination of the Brillouin peak gain frequency, based for the first time on real experimental conditions. This expression is experimentally validated, and describes how this frequency uncertainty depends on measurement parameters, such as Brillouin gain linewidth, frequency scanning step and signal-to-noise ratio. Based on the model leading to this expression and considering the limitations imposed by nonlinear effects and pump depletion, a figure-of-merit is proposed to fairly compare the performance of Brillouin distributed sensing systems. This figure-of-merit offers to the research community and to potential users the possibility to evaluate with an objective metric the real performance gain resulting from any proposed configuration.
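The dependence of the peak-frequency uncertainty on signal-to-noise ratio can be illustrated with a hedged simulation: fit a parabola to the top of a noisy Lorentzian gain spectrum and compare the estimation error at two SNRs. The linewidth, scan step, and frequencies below are typical values, not those of the paper:

```python
import numpy as np

# Simulated Brillouin peak-frequency estimation: quadratic fit near the top
# of a noisy Lorentzian gain spectrum. All parameter values are illustrative.
rng = np.random.default_rng(1)
nu = np.arange(10.6e9, 11.0e9, 1e6)      # frequency scan, 1 MHz step (Hz)
nu_b, dnu = 10.8e9, 30e6                 # peak frequency and FWHM linewidth
gain = 1.0 / (1.0 + ((nu - nu_b) / (dnu / 2)) ** 2)   # Lorentzian profile

def peak_estimate(snr):
    noisy = gain + rng.normal(0.0, 1.0 / snr, nu.size)
    top = noisy >= 0.75 * noisy.max()    # keep only points near the peak
    c = np.polyfit((nu[top] - nu_b) / 1e6, noisy[top], 2)  # fit in MHz units
    return nu_b + 1e6 * (-c[1] / (2 * c[0]))               # parabola vertex

err_snr10 = np.mean([abs(peak_estimate(10) - nu_b) for _ in range(50)])
err_snr1000 = np.mean([abs(peak_estimate(1000) - nu_b) for _ in range(50)])
print(err_snr1000 < err_snr10)  # higher SNR -> smaller frequency uncertainty
```

The analytical expression derived in the paper makes this scaling with SNR, linewidth, and scanning step explicit rather than empirical.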
NASA Astrophysics Data System (ADS)
Dai, Liyun; Che, Tao; Ding, Yongjian; Hao, Xiaohua
2017-08-01
Snow cover on the Qinghai-Tibetan Plateau (QTP) plays a significant role in the global climate system and is an important water resource for rivers in the high-elevation region of Asia. At present, passive microwave (PMW) remote sensing data are the only efficient way to monitor temporal and spatial variations in snow depth at large scale. However, existing snow depth products show the largest uncertainties across the QTP. In this study, the MODIS fractional snow cover product and point, line and intensive sampling data are synthesized to evaluate the accuracy of snow cover and snow depth derived from PMW remote sensing data and to analyze the possible causes of uncertainties. The results show that the accuracy of snow cover extents varies spatially and depends on the fraction of snow cover. Based on the assumption that grids with MODIS snow cover fraction > 10 % are regarded as snow cover, the overall accuracy in snow cover is 66.7 %, overestimation error is 56.1 %, underestimation error is 21.1 %, commission error is 27.6 % and omission error is 47.4 %. The commission and overestimation errors of snow cover primarily occur in the northwest and southeast areas with low ground temperature. Omission error primarily occurs in cold desert areas with shallow snow, and underestimation error mainly occurs in glacier and lake areas. With the increase of snow cover fraction, the overestimation error decreases and the omission error increases. A comparison between snow depths measured in field experiments, measured at meteorological stations and estimated across the QTP shows that agreement between observation and retrieval improves with an increasing number of observation points in a PMW grid. The misclassification and errors between observed and retrieved snow depth are associated with the relatively coarse resolution of PMW remote sensing, ground temperature, snow characteristics and topography. 
To accurately understand the variation in snow depth across the QTP, new algorithms should be developed to retrieve snow depth with higher spatial resolution and should consider the variation in brightness temperatures at different frequencies emitted from ground with changing ground features.
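Accuracy metrics of the kind quoted above can be computed from a 2 x 2 confusion matrix. A small sketch with made-up counts, chosen only to roughly mimic the reported percentages and not taken from the study:

```python
import numpy as np

# Confusion-matrix accuracy metrics for a binary snow-cover classification:
# overall accuracy, commission error (false detections among retrievals) and
# omission error (misses among observed snow). Counts are illustrative.
# rows: observed (snow, no-snow); cols: retrieved (snow, no-snow)
cm = np.array([[50, 45],
               [19, 86]])

snow_hits, snow_miss = cm[0, 0], cm[0, 1]
false_snow = cm[1, 0]

overall_accuracy = (cm[0, 0] + cm[1, 1]) / cm.sum()
commission = false_snow / (snow_hits + false_snow)  # wrongly retrieved snow
omission = snow_miss / (snow_hits + snow_miss)      # observed snow missed

print(round(overall_accuracy, 3), round(commission, 3), round(omission, 3))
# 0.68 0.275 0.474
```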
NASA Astrophysics Data System (ADS)
Akbarashrafi, F.; Al-Attar, D.; Deuss, A.; Trampert, J.; Valentine, A. P.
2018-04-01
Seismic free oscillations, or normal modes, provide a convenient tool to calculate low-frequency seismograms in heterogeneous Earth models. A procedure called `full mode coupling' allows the seismic response of the Earth to be computed. However, in order to be theoretically exact, such calculations must involve an infinite set of modes. In practice, only a finite subset of modes can be used, introducing an error into the seismograms. By systematically increasing the number of modes beyond the highest frequency of interest in the seismograms, we investigate the convergence of full-coupling calculations. As a rule-of-thumb, it is necessary to couple modes 1-2 mHz above the highest frequency of interest, although results depend upon the details of the Earth model. This is significantly higher than has previously been assumed. Observations of free oscillations also provide important constraints on the heterogeneous structure of the Earth. Historically, this inference problem has been addressed by the measurement and interpretation of splitting functions. These can be seen as secondary data extracted from low frequency seismograms. The measurement step necessitates the calculation of synthetic seismograms, but current implementations rely on approximations referred to as self- or group-coupling and do not use fully accurate seismograms. We therefore also investigate whether a systematic error might be present in currently published splitting functions. We find no evidence for any systematic bias, but published uncertainties must be doubled to properly account for the errors due to theoretical omissions and regularization in the measurement process. Correspondingly, uncertainties in results derived from splitting functions must also be increased. As is well known, density has only a weak signal in low-frequency seismograms. Our results suggest this signal is of similar scale to the true uncertainties associated with currently published splitting functions. 
Thus, it seems that great care must be taken in any attempt to robustly infer details of Earth's density structure using current splitting functions.
Reduction of low frequency error for SED36 and APS based HYDRA star trackers
NASA Astrophysics Data System (ADS)
Ouaknine, Julien; Blarre, Ludovic; Oddos-Marcel, Lionel; Montel, Johan; Julio, Jean-Marc
2017-11-01
As part of the CNES Pleiades satellite programme, the star tracker low frequency error, which is the most penalizing error for satellite attitude control, was reduced. For that purpose, the SED36 star tracker was developed, with a design based on the flight-qualified SED16/26. In this paper, the main features of the SED36 are first presented. The process of reducing the low frequency error is then developed, in particular the optimization of the optical distortion calibration. The result is an attitude low frequency error of 1.1" at 3 sigma along the transverse axes. The application of these improvements to HYDRA, the new multi-head APS star tracker developed by SODERN, is finally presented.
2012-12-01
One begins with the Eikonal equation for the acoustic phase function S(t,x) as derived from the geometric acoustics (high-frequency) approximation to … zb(x) is smooth and reasonably approximated as piecewise linear. The time-domain ray (characteristic) equations for the Eikonal equation are ẋ(t) = c… The travel time is affected, which is more physically relevant than global error in φ, since it provides the phase information for the Eikonal equation (2.1).
Precision saturated absorption spectroscopy of H3+
NASA Astrophysics Data System (ADS)
Guan, Yu-Chan; Chang, Yung-Hsiang; Liao, Yi-Chieh; Peng, Jin-Long; Wang, Li-Bang; Shy, Jow-Tsong
2018-03-01
In our previous work on the Lamb-dips of the ν2 fundamental band transitions of H3+, the saturated absorption spectrum was obtained by third-derivative spectroscopy using frequency modulation with an optical parametric oscillator (OPO). However, frequency modulation also caused errors in the absolute frequency determination. To solve this problem, we built a tunable offset locking system to lock the pump frequency of the OPO to an iodine-stabilized Nd:YAG laser. With this improvement, we were able to scan the OPO idler frequency precisely and obtain the saturated absorption profile using intensity modulation. Furthermore, ion concentration modulation was employed to subtract the background noise and increase the signal-to-noise ratio. To determine the absolute frequency of the idler wave, the OPO signal frequency was locked to an optical frequency comb. The absolute frequency accuracy of our spectrometer was better than 7 kHz, demonstrated by measuring the wavelength standard transition of methane at 3.39 μm. Finally, we measured 16 transitions of H3+ and our results agree very well with other precision measurements. This work successfully resolved the discrepancies between our previous measurements and other precision measurements.
Jared, Debra; O'Donnell, Katrina
2017-02-01
We examined whether highly skilled adult readers activate the meanings of high-frequency words using phonology when reading sentences for meaning. A homophone-error paradigm was used. Sentences were written to fit 1 member of a homophone pair, and then 2 other versions were created in which the homophone was replaced by its mate or a spelling-control word. The error words were all high-frequency words, and the correct homophones were either higher-frequency words or low-frequency words; that is, the homophone errors were either the subordinate or the dominant member of the pair. Participants read the sentences as their eye movements were tracked. When the high-frequency homophone error words were the subordinate member of the homophone pair, participants had shorter immediate eye-fixation latencies on these words than on matched spelling-control words. In contrast, when the high-frequency homophone error words were the dominant member of the pair, the difference between these words and spelling controls was delayed. These findings provide clear evidence that the meanings of high-frequency words are activated by phonological representations when skilled readers read sentences for meaning. Explanations of the differing patterns of results depending on homophone dominance are discussed.
Reliable absolute analog code retrieval approach for 3D measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun
2017-11-01
The wrapped phase produced by the phase-shifting approach can be unwrapped using Gray code, but both wrapped-phase errors and Gray code decoding errors can result in period jump errors, which lead to gross measurement errors. This paper therefore presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain a low-frequency absolute analog code. The difference between the two absolute analog codes is then employed to eliminate period jump errors, yielding a reliable unwrapped result. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
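A minimal sketch of the basic Gray-code unwrapping step that the paper builds on: the decoded Gray word selects the fringe period, and the absolute phase is the wrapped phase plus a whole number of 2π. Code length and values are illustrative; the paper's period-jump elimination via two analog codes is not reproduced here:

```python
import math

# Gray-code-assisted phase unwrapping sketch: the Gray code word identifies
# the fringe period k, and absolute phase = 2*pi*k + wrapped phase.
def gray_to_binary(g):
    """Decode a reflected-binary (Gray) code word to its binary value."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def unwrap(wrapped_phase, gray_word):
    k = gray_to_binary(gray_word)        # fringe period index
    return 2 * math.pi * k + wrapped_phase

# Period 5 is encoded as Gray 0b111 (binary 5 -> Gray 7)
print(unwrap(1.0, 0b111))  # 2*pi*5 + 1.0, about 32.42
```

A decoding error at a period boundary changes k by one, producing exactly the 2π "period jump" error that the difference of the two absolute analog codes is designed to catch.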
More sound of church bells: Authors' correction
NASA Astrophysics Data System (ADS)
Vogt, Patrik; Kasper, Lutz; Burde, Jan-Philipp
2016-01-01
In the recently published article "The Sound of Church Bells: Tracking Down the Secret of a Traditional Arts and Crafts Trade," the bell frequencies were erroneously oversimplified. The problem affects Eqs. (2) and (3), which were derived from the elementary "coffee mug model" and in which we used the speed of sound in air. This does not make sense from a physical point of view, since in the case of bells air only acts as a sound carrier, not as a sound source. Due to the excellent fit of the theoretical model with the empirical data, we unfortunately failed to notice this error before publication. All other equations, e.g., the introduction of the correction factor in Eq. (4) and the estimation of the mass in Eqs. (5) and (6), are not affected by this error, since they represent empirical models. It is nevertheless unfortunate to introduce the speed of sound in air as a constant in Eqs. (4) and (6). Instead, we suggest the following simple rule of thumb for relating the radius of a church bell R to its humming frequency f_hum:
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
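The harmonic-analysis step at the heart of the frequency-domain approach can be sketched as extracting the N/rev cosine and sine components of one revolution of measured vibration; the harmonic number and signal amplitudes below are illustrative:

```python
import numpy as np

# Sketch of the harmonic-analysis step used by the frequency-domain approach:
# extract the N/rev cosine and sine components of a measured time history.
n_per_rev = 4                                   # 4/rev component of interest
samples_per_rev = 64
psi = 2 * np.pi * np.arange(samples_per_rev) / samples_per_rev  # rotor azimuth

# Synthetic measurement containing only the 4/rev harmonic
z = 0.5 * np.cos(n_per_rev * psi) - 0.2 * np.sin(n_per_rev * psi)

zc = 2.0 / samples_per_rev * np.sum(z * np.cos(n_per_rev * psi))
zs = 2.0 / samples_per_rev * np.sum(z * np.sin(n_per_rev * psi))
print(round(zc, 3), round(zs, 3))  # recovers 0.5 and -0.2
```

The time-domain approach skips this extraction and operates on the raw samples directly, which is one source of the robustness and implementation differences discussed in the report.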
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
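A hedged sketch of weighting independent T-year flood estimates by their average standard errors of prediction. This uses pure inverse-variance weights; the report's cross-correlation term is omitted, and all numbers are illustrative:

```python
# Inverse-variance weighting of flood-frequency estimates from three methods
# (basin/climatic characteristics, active-channel width, bankfull width).
# Estimates and standard errors are illustrative, not from the report.
estimates = [1200.0, 950.0, 1100.0]   # Q100 estimates from the three methods
se_percent = [37.4, 57.2, 63.1]       # average standard errors of prediction

weights = [1.0 / se**2 for se in se_percent]
total = sum(weights)
weighted = sum(w * q for w, q in zip(weights, estimates)) / total
print(round(weighted, 1))
```

Because the most reliable method (smallest standard error) dominates, the weighted estimate lands closest to the basin-characteristic value, mirroring why weighting reduced the standard errors of prediction in most regions.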
Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions
NASA Astrophysics Data System (ADS)
van Hooidonk, R.; Huber, M.
2012-03-01
Future widespread coral bleaching and subsequent mortality have been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models reproduce many aspects of climate with fidelity, they vary in their ability to correctly capture such features as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of coral bleaching predictions, but little attention has been paid to understanding the potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of the various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with the corresponding frequencies from the 20th-century simulations of 24 GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report. The resulting thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). The major findings are that: (1) predictions are most sensitive to the seasonal cycle and to inter-annual variability in the ENSO 24-60 month frequency band; and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved.
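The Peirce skill score used in the study is simply the hit rate minus the false-alarm rate computed from a 2 x 2 contingency table of predicted versus observed bleaching; the counts below are illustrative:

```python
# Peirce skill score (PSS) for binary bleaching forecasts.
# Contingency-table counts are illustrative, not from the study.
hits, misses = 40, 10                 # bleaching observed: predicted yes / no
false_alarms, correct_neg = 20, 80    # no bleaching observed

hit_rate = hits / (hits + misses)                               # 0.8
false_alarm_rate = false_alarms / (false_alarms + correct_neg)  # 0.2
pss = hit_rate - false_alarm_rate
print(round(pss, 3))  # 0.6
```

PSS ranges from -1 to 1, with 0 for a no-skill forecast, which makes it a convenient objective metric for comparing the frequency-band substitution experiments.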
Wells, Jered R.; Dobbins, James T.
2012-01-01
Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm−1) and approximate circular symmetry at frequencies below 4 mm−1. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm−1. 
Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm−1) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation. PMID:23039654
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, Jered R.; Dobbins, James T. III; Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705
2012-10-15
Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm⁻¹) and approximate circular symmetry at frequencies below 4 mm⁻¹. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm⁻¹.
Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm⁻¹) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation.
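As a rough illustration of the surface-fit step described above, the sketch below interpolates 1D radial MTF curves onto a 2D Cartesian frequency grid using Delaunay-based interpolation. All inputs are invented for the demo (a separable sinc model for a 0.2 mm square aperture, seven angles); a real analysis would use measured slit/edge MTF curves.

```python
import numpy as np
from scipy.interpolate import griddata

# 1D MTF curves "measured" along several angles (hypothetical separable sinc
# model for a 0.2 mm square aperture; real input would be slit/edge MTF data)
angles = np.deg2rad([0, 15, 30, 45, 60, 75, 90])
freqs = np.linspace(0.1, 2.5, 25)            # cycles/mm, up to the cutoff
a = 0.2                                      # assumed pixel pitch in mm
pts, vals = [], []
for th in angles:
    fx, fy = freqs * np.cos(th), freqs * np.sin(th)
    pts.append(np.column_stack([fx, fy]))
    vals.append(np.abs(np.sinc(a * fx) * np.sinc(a * fy)))
pts.append(np.array([[0.0, 0.0]]))           # single DC sample shared by all angles
vals.append(np.array([1.0]))
pts, vals = np.vstack(pts), np.concatenate(vals)

# Surface fit: interpolate the radial samples onto a Cartesian frequency grid
# (griddata's 'linear' method triangulates the scattered samples with Delaunay)
gx, gy = np.meshgrid(np.linspace(0, 2.5, 51), np.linspace(0, 2.5, 51))
mtf2d = griddata(pts, vals, (gx, gy), method='linear')
print(mtf2d[10, 10])                         # value at (0.5, 0.5) cycles/mm, ~sinc(0.1)^2
```

Grid points outside the sampled fan (e.g., the far corner beyond the 2.5 mm⁻¹ radius) come back as NaN, which mirrors the fact that the 1D measurements only constrain the MTF inside the measured frequency disk.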
Nonlinear ARMA models for the Dst index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear autoregressive moving average (ARMA) model, to the nonlinear damped-oscillator physical model. The oscillator parameters (the growth and decay rates, the oscillation frequencies, and the coupling strength to the input) are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
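The filter-to-oscillator conversion can be illustrated on the simplest case. For a second-order autoregressive model, the characteristic roots map directly to a damped oscillator's decay rate and frequency. The sketch below is a generic AR(2) illustration with invented coefficients, not the paper's nonlinear ARMA fit.

```python
import numpy as np

def ar2_to_oscillator(a1, a2, dt=1.0):
    """Map x_t = a1*x_{t-1} + a2*x_{t-2} to (decay_rate, angular_frequency)."""
    roots = np.roots([1.0, -a1, -a2])      # roots of z^2 - a1*z - a2
    z = roots[np.argmax(roots.imag)]       # take the upper complex root
    decay = -np.log(np.abs(z)) / dt        # > 0 means damped
    omega = np.abs(np.angle(z)) / dt       # oscillation frequency (rad per step)
    return decay, omega

# Build AR(2) coefficients from a known oscillator and recover its parameters
gamma_true, omega_true, dt = 0.1, 0.5, 1.0
z = np.exp((-gamma_true + 1j * omega_true) * dt)
a1, a2 = 2 * z.real, -abs(z) ** 2          # z^2 - a1*z - a2 has roots z, conj(z)
print(ar2_to_oscillator(a1, a2, dt))       # ~ (0.1, 0.5)
```

The round trip works because the AR(2) characteristic polynomial has the complex-conjugate root pair exp((-γ ± iω)Δt), so the coefficient sum gives 2·Re(z) and the product gives |z|².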
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2003-01-01
The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles, and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data are compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.
Theoretical Modelling of Sound Radiation from Plate
NASA Astrophysics Data System (ADS)
Zaman, I.; Rozlan, S. A. M.; Yusoff, A.; Madlan, M. A.; Chan, S. W.
2017-01-01
The development of the aerospace, automotive and building industries increasingly demands lightweight materials such as thin plates. However, thin plates can contribute significant vibration and sound radiation, which eventually leads to increased noise in the community. In this study, the sound pressure radiated from a simply-supported thin plate (SSP) was analyzed through the derivation of mathematical equations and numerical simulation in ANSYS®. The solution to the mathematical equations for sound radiated from an SSP was visualized using MATLAB®. Sound pressure level responses were measured in the far field as well as the near field over the frequency range of 0-200 Hz. The results show that four resonance frequencies (12 Hz, 60 Hz, 106 Hz and 158 Hz) were identified, corresponding to the peaks in the frequency response function graph. The results also indicate that the mathematical derivation correlates well with the ANSYS® simulation model, with an error of less than 10%. It can be concluded that the obtained model is reliable and can be applied in further analyses, such as reducing the noise emitted from a vibrating thin plate.
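The modal frequencies underlying such resonance peaks follow from classical thin-plate theory. The sketch below evaluates the standard simply-supported-plate formula for a hypothetical steel-like plate; the dimensions and material constants are invented for the demo and are not the plate studied above.

```python
import numpy as np

def ss_plate_frequencies(a, b, h, E, rho, nu, modes):
    """Natural frequencies (Hz) of (m, n) modes of a simply-supported thin plate."""
    D = E * h**3 / (12 * (1 - nu**2))        # flexural rigidity
    out = []
    for m, n in modes:
        # Classical result: w_mn = pi^2 * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho*h))
        w = np.pi**2 * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (rho * h))
        out.append(w / (2 * np.pi))          # rad/s -> Hz
    return out

# Hypothetical 0.5 m x 0.4 m x 2 mm steel plate
f = ss_plate_frequencies(0.5, 0.4, 0.002, 210e9, 7850, 0.3,
                         [(1, 1), (2, 1), (1, 2), (2, 2)])
print([round(x, 1) for x in f])              # (1,1) mode lowest, ~50 Hz
```

The same formula explains why the peaks are not evenly spaced: the frequency scales with (m/a)² + (n/b)², so mode order depends on the aspect ratio.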
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J; Qi, H; Wu, S
Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side contains projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) performing a Fourier transform on the projections; 3) restoring the incomplete frequency data using the consistency condition equation; 4) performing an inverse Fourier transform; 5) repeating steps 2)-4) until our criterion is met to terminate the iteration. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method.
Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.
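The iterate-transform-restore loop in steps 1)-5) has the same structure as classic Papoulis-Gerchberg gap restoration. The sketch below uses that generic scheme as a stand-in: the "consistency condition" enforced in the frequency domain is simple band-limitedness rather than the John's-equation condition, and the 1D signal and gap are invented for the demo.

```python
import numpy as np

n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)
known = np.ones(n, dtype=bool)
known[100:108] = False                  # 8 "missing" samples (incomplete data)

x = np.where(known, signal, 0.0)        # step 1: initial estimate for the gap
for _ in range(2000):
    X = np.fft.fft(x)                   # step 2: Fourier transform
    X[np.abs(np.fft.fftfreq(n)) > 8 / n] = 0.0   # step 3: enforce consistency (band limit)
    x = np.fft.ifft(X).real             # step 4: inverse Fourier transform
    x[known] = signal[known]            # re-impose measured data; repeat (step 5)
gap_err = np.abs(x[~known] - signal[~known]).max()
print(gap_err)                          # the gap is recovered to high accuracy
```

Each pass projects the estimate onto the set of consistent signals and then back onto the set of signals agreeing with the measurements, so the missing samples converge toward the unique consistent completion.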
Frequency of pediatric medication administration errors and contributing factors.
Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda
2011-01-01
This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.
Improved EEG Event Classification Using Differential Energy.
Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J
2015-12-01
Feature extraction for automatic classification of EEG signals typically relies on time-frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications, including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discrimination of periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter-bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24% absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.
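A minimal sketch of the two ingredients named above: regression-based first/second derivatives (delta features) and a differential-energy term that responds to bursty events. The framing, window sizes, and synthetic signal are illustrative only, not the paper's exact configuration.

```python
import numpy as np

def frame_energy(x, frame_len):
    """Log energy per non-overlapping frame."""
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    return np.log((frames ** 2).sum(axis=1) + 1e-12)

def delta(feat, width=2):
    """Standard regression-based delta over +/- width frames."""
    pad = np.pad(feat, width, mode='edge')
    num = sum(k * (pad[width + k: len(feat) + width + k] -
                   pad[width - k: len(feat) + width - k])
              for k in range(1, width + 1))
    return num / (2 * sum(k * k for k in range(1, width + 1)))

def differential_energy(e, win=5):
    """Max-minus-min energy in a sliding window: large for bursty events."""
    pad = np.pad(e, win // 2, mode='edge')
    return np.array([pad[i:i + win].max() - pad[i:i + win].min()
                     for i in range(len(e))])

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
x[2000:2400] *= 10.0                      # an "event" with raised energy
e = frame_energy(x, 100)
d1, d2 = delta(e), delta(delta(e))        # first and second derivative features
de = differential_energy(e)
print(de.argmax())                        # peaks at the event boundary frames
```

The feature vector per frame would then concatenate the filter-bank terms with d1, d2 and de; here only the energy channel is shown.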
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of periodical phase errors is analyzed. The periodical phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodical phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
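The inverse-phase idea can be demonstrated in one dimension: a gamma-type projector nonlinearity produces a periodic phase error in N-step phase shifting, and a second fringe set whose initial phase is offset by π/N produces the opposite error, so averaging cancels the dominant harmonic. The fringe model, gamma value, and step count below are invented for the demo; the actual paper combines this with multi-frequency unwrapping.

```python
import numpy as np

def psp_phase(phi, n_steps, offset, gamma):
    """N-step phase-shifting estimate from gamma-distorted fringe intensities."""
    num = den = 0.0
    for n in range(n_steps):
        d = 2 * np.pi * n / n_steps + offset
        I = (0.5 + 0.4 * np.cos(phi + d)) ** gamma   # simulated projector nonlinearity
        num += I * np.sin(d)
        den += I * np.cos(d)
    return np.arctan2(-num, den)

phi = np.linspace(0.3, 2 * np.pi - 0.3, 500)         # true phase values
wrap = lambda a: np.angle(np.exp(1j * a))            # wrap to (-pi, pi]
e1 = wrap(psp_phase(phi, 4, 0.0, 2.2) - phi)         # error, normal patterns
e2 = wrap(psp_phase(phi, 4, np.pi / 4, 2.2) - phi)   # error, pi/N inverse-phase patterns
e_avg = (e1 + e2) / 2                                # dominant harmonic cancels
print(np.abs(e1).max(), np.abs(e_avg).max())         # averaged error is much smaller
```

The cancellation works because the N-step error is approximately sinusoidal in Nφ, and the π/N pattern offset shifts that component by exactly half a period.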
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power spectral density (PSD) has become entrenched in optics design and manufacturing as a characterization of mid-to-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. The SSF is employed here to study the correction ability of the MRF process for errors at different spatial frequencies. The surface figures and PSD curves of a work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF at each spatial frequency is expressed as a normalized numerical value.
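A PSD-ratio evaluation of this kind can be sketched in one dimension: compare the PSD of the residual error after a finishing pass to the PSD of the initial surface, frequency by frequency. The "tool" below is a plain moving average standing in for the MRF influence function, and the surface profile is random; both are assumptions for the demo, and the normalization may differ from the paper's SSF definition.

```python
import numpy as np

rng = np.random.default_rng(1)
surface = rng.standard_normal(4096)                 # surface error profile, arbitrary units
kernel = np.ones(31) / 31                           # stand-in tool influence function
# "Finishing" removes the smooth (low-frequency) component, leaving a residual
residual = surface - np.convolve(surface, kernel, mode='same')

def psd(x):
    X = np.fft.rfft(x - x.mean())
    return np.abs(X) ** 2 / len(x)

ssf = psd(residual) / (psd(surface) + 1e-30)        # ~0: corrected; ~1: uncorrectable
freqs = np.fft.rfftfreq(4096, d=1.0)                # spatial frequency, cycles/sample
lo = ssf[(freqs > 0) & (freqs < 0.005)].mean()      # low frequencies: well corrected
hi = ssf[freqs > 0.2].mean()                        # high frequencies: left behind
print(lo, hi)
```

The curve makes the familiar point quantitative: a finite-size tool corrects figure error well below the inverse tool size but leaves MHSF errors essentially untouched.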
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.
1988-01-01
Presented is a mathematical model derived from the Navier-Stokes equations of momentum and continuity, which may be accurately used to predict the behavior of conventionally mounted pneumatic sensing systems subject to arbitrary pressure inputs. Numerical techniques for solving the general model are developed. Both step and frequency response lab tests were performed. These data are compared with solutions of the mathematical model and show excellent agreement. The procedures used to obtain the lab data are described. In-flight step and frequency response data were obtained. Comparisons with numerical solutions of the math model show good agreement. Procedures used to obtain the flight data are described. Difficulties encountered with obtaining the flight data are discussed.
Data processing and error analysis for the CE-1 Lunar microwave radiometer
NASA Astrophysics Data System (ADS)
Feng, Jian-Qing; Su, Yan; Liu, Jian-Jun; Zou, Yong-Liao; Li, Chun-Lai
2013-03-01
The microwave radiometer (MRM) onboard the Chang'E-1 (CE-1) lunar orbiter is a 4-frequency microwave radiometer, mainly used to obtain the brightness temperature (TB) of the lunar surface, from which the thickness, temperature, dielectric constant and other related properties of the lunar regolith can be derived. The working mode of the CE-1 MRM, the ground calibration (including the official calibration coefficients), and the acquisition and processing of the raw data are introduced. Our data analysis shows that TB increases with increasing frequency, decreases towards the lunar poles and is significantly affected by solar illumination. Our analysis also reveals that the main uncertainty in TB comes from the ground calibration.
Enemark, John H
2017-10-10
Sulfite-oxidizing enzymes from eukaryotes and prokaryotes have five-coordinate distorted square-pyramidal coordination about the molybdenum atom. The paramagnetic Mo(V) state is easily generated, and over the years four distinct CW EPR spectra have been identified, depending upon enzyme source and reaction conditions, namely high and low pH (hpH and lpH), phosphate-inhibited (Pi) and sulfite (or blocked). Extensive studies of these paramagnetic forms of sulfite-oxidizing enzymes using variable-frequency pulsed electron spin echo (ESE) spectroscopy, isotopic labeling and density functional theory (DFT) calculations have led to the consensus structures that are described here. Errors in some of the previously proposed structures are corrected.
Highly compact fiber Fabry-Perot interferometer: A new instrument design
NASA Astrophysics Data System (ADS)
Nowakowski, B. K.; Smith, D. T.; Smith, S. T.
2016-11-01
This paper presents the design, construction, and characterization of a new optical-fiber-based, low-finesse Fabry-Perot interferometer with a simple cavity formed by two reflecting surfaces (the end of a cleaved optical fiber and a plane, reflecting counter-surface), for the continuous measurement of displacements of several nanometers to several tens of millimeters. No beam collimation or focusing optics are required, resulting in a displacement sensor that is extremely compact (optical fiber diameter 125 μm), is surprisingly tolerant of misalignment (more than 5°), and can be used over a very wide range of temperatures and environmental conditions, including ultra-high-vacuum. The displacement measurement is derived from interferometric phase measurements using an infrared laser source whose wavelength is modulated sinusoidally at a frequency f. The phase signal is in turn derived from changes in the amplitudes of demodulated signals, at both the modulation frequency, f, and its harmonic at 2f, coming from a photodetector that is monitoring light intensity reflected back from the cavity as the cavity length changes. Simple quadrature detection results in phase errors corresponding to displacement errors of up to 25 nm, but by using compensation algorithms discussed in this paper, these inherent non-linearities can be reduced to below 3 nm. In addition, wavelength sweep capability enables measurement of the absolute surface separation. This experimental design creates a unique set of displacement measuring capabilities not previously combined in a single interferometer.
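The f/2f demodulation scheme described above can be sketched numerically: with sinusoidal wavelength modulation, the detected intensity contains harmonics whose amplitudes are Bessel-weighted by sin and cos of the interferometric phase, so the phase follows from an arctangent of the normalized 1f and 2f lock-in outputs. The signal model, modulation depth, and normalization below are the standard phase-generated-carrier treatment, assumed for the demo rather than taken from this instrument.

```python
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind

phi_true = 1.2                 # interferometric phase to recover (rad)
m = 2.0                        # wavelength-modulation depth (rad)
fs, f0, T = 100_000, 1000.0, 0.05
t = np.arange(0, T, 1 / fs)    # an integer number of modulation periods
intensity = 1.0 + 0.5 * np.cos(phi_true + m * np.cos(2 * np.pi * f0 * t))

# Lock-in style demodulation at f and 2f
s1 = 2 * np.mean(intensity * np.cos(2 * np.pi * f0 * t))       # = -J1(m) sin(phi)
s2 = 2 * np.mean(intensity * np.cos(2 * np.pi * 2 * f0 * t))   # = -J2(m) cos(phi)
phi = np.arctan2(-s1 / jv(1, m), -s2 / jv(2, m))
print(phi)                     # ~1.2
```

Quadrature detection of exactly this form exhibits the periodic nonlinearity mentioned in the abstract when the Bessel normalization or modulation depth is imperfect, which is what the compensation algorithms correct.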
Tiyip, Tashpolat; Ding, Jianli; Zhang, Dong; Liu, Wei; Wang, Fei; Tashpolat, Nigara
2017-01-01
Effective pretreatment of spectral reflectance is vital to model accuracy in soil parameter estimation. However, the classic integer-order derivative has some disadvantages, including spectral information loss and the introduction of high-frequency noise. In this paper, the fractional order derivative algorithm was applied to the pretreatment and partial least squares regression (PLSR) was used to assess the clay content of desert soils. Overall, 103 soil samples were collected from the Ebinur Lake basin in the Xinjiang Uighur Autonomous Region of China, and used as data sets for calibration and validation. Following laboratory measurements of spectral reflectance and clay content, the raw spectral reflectance and absorbance data were treated using fractional derivative orders from 0.0 to 2.0 (order interval: 0.2). The ratio of performance to deviation (RPD), coefficients of determination for calibration (Rc²), root mean square errors of calibration (RMSEC), coefficients of determination for prediction (Rp²), and root mean square errors of prediction (RMSEP) were applied to assess the performance of the predictive models. The results showed that models built on fractional derivative orders performed better than those using the classic integer-order derivative. Comparison of the predictive effects of 22 models for estimating clay content, calibrated by PLSR, showed that the models based on the fractional derivative 1.8 order of spectral reflectance (Rc² = 0.907, RMSEC = 0.425%, Rp² = 0.916, RMSEP = 0.364%, and RPD = 2.484 ≥ 2.000) and absorbance (Rc² = 0.888, RMSEC = 0.446%, Rp² = 0.918, RMSEP = 0.383%, and RPD = 2.511 ≥ 2.000) were most effective. Furthermore, they performed well in quantitative estimations of the clay content of soils in the study area. PMID:28934274
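A common way to realize the fractional-order pretreatment is the Grünwald-Letnikov definition, which generalizes finite differences to non-integer orders. The sketch below applies it to a synthetic reflectance-like curve; order 0 returns the signal unchanged and order 1 reduces to the classic first difference, with 1.8 shown as the order the study found best. Whether the paper used this exact discretization is an assumption.

```python
import numpy as np

def gl_fractional_derivative(y, alpha, h=1.0):
    """Grünwald-Letnikov fractional derivative of order alpha (0 <= alpha <= 2)."""
    n = len(y)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k   # recursive GL binomial weights
    # out[i] = sum_k w[k] * y[i-k], scaled by the step size
    out = np.array([np.dot(w[:i + 1], y[i::-1]) for i in range(n)])
    return out / h ** alpha

y = np.exp(-np.linspace(0, 3, 200))             # synthetic reflectance-like curve
d0 = gl_fractional_derivative(y, 0.0)           # identical to y
d1 = gl_fractional_derivative(y, 1.0)           # matches the first difference
d18 = gl_fractional_derivative(y, 1.8)          # the order reported best above
print(np.allclose(d0, y), np.allclose(d1[1:], np.diff(y)))
```

Intermediate orders interpolate smoothly between these limits, which is why they can sharpen absorption features without amplifying noise as aggressively as the integer derivative.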
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan
2008-01-01
This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squared tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
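For context, a minimal first-order model-reference adaptive loop with a Lyapunov-based adaptive law is sketched below. This is the generic MRAC setting the paper modifies, not the optimal control modification itself; the plant, reference model, and gains are invented for the demo.

```python
import numpy as np

def simulate_mrac(gamma, T=20.0, dt=0.001):
    """Plant: xdot = a*x + u with unknown a; reference model: xmdot = -2*(xm - r)."""
    a_true, r = 1.0, 1.0                    # unknown plant pole, step command
    x = xm = 0.0
    k_hat = 0.0                             # adaptive estimate of the feedback gain
    for _ in range(int(T / dt)):
        e = x - xm                          # tracking error
        u = -k_hat * x - 2 * (x - r)        # adaptive term + nominal control
        k_hat += gamma * e * x * dt         # adaptive law: k_hat_dot = gamma*e*x
        x += (a_true * x + u) * dt          # Euler integration of plant
        xm += -2.0 * (xm - r) * dt          # and of reference model
    return abs(x - xm), k_hat

e_slow, k_slow = simulate_mrac(gamma=1.0)
e_fast, k_fast = simulate_mrac(gamma=50.0)
print(e_slow, e_fast, k_fast)               # both converge; large gain adapts faster
```

With the Lyapunov function V = e²/2 + (k_hat - a)²/(2γ) one gets V̇ = -2e² ≤ 0, so the loop is stable for any γ > 0; the paper's point is that in richer settings a very large γ excites high-frequency control activity, which its modification suppresses.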
Application of the Hartmann–Tran profile to precise experimental data sets of 12C2H2
Forthomme, D.; Cich, M. J.; Twagirayezu, S.; ...
2015-06-25
Self- and nitrogen-broadened line shape data for the P e(11) line of the ν₁ + ν₃ band of acetylene, recorded using a frequency comb-stabilized laser spectrometer, have been analyzed using the Hartmann–Tran profile (HTP) line shape model in a multispectrum fitting. In total, the data included measurements recorded at temperatures between 125 K and 296 K and at pressures between 4 and 760 Torr. New, sub-Doppler, frequency comb-referenced measurements of the positions of multiple underlying hot band lines have also been made. These underlying lines significantly affect the P e(11) line profile at temperatures above 240 K, and poorly known frequencies previously introduced errors into the line shape analyses. The behavior of the HTP model was compared to the quadratic speed-dependent Voigt profile (QSDVP) expressed in the frequency and time domains. A parameter uncertainty analysis was carried out using a Monte Carlo method based on the estimated pressure, transmittance and frequency measurement errors. From the analyses, the P e(11) line strength was estimated to be 1.2014(50) × 10⁻²⁰ cm·molecule⁻¹ at 296 K, with the standard deviation in parentheses. For analyzing these data, we found that a reduced form of the HTP, equivalent to the QSDVP, was most appropriate because the additional parameters included in the full HTP were not well determined. As a supplement to this work, expressions for analytic derivatives and a lineshape fitting code written in Matlab for the HTP are available.
cBathy: A robust algorithm for estimating nearshore bathymetry
Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd
2013-01-01
A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field, in which dominant frequencies are estimated by Fourier transform while corresponding wavenumbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase two, depths are found that best fit weighted sets of frequency-wavenumber pairs. These are subsequently smoothed in time in phase three using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively, but errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline where analysis tiles mix information from waves, swash and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wavenumber estimates can also be incorporated into data assimilation models of nearshore dynamics.
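The depth inversion in phase two rests on the linear dispersion relation ω² = g·k·tanh(k·h): each observed (frequency, wavenumber) pair constrains the depth h. The sketch below inverts that relation by bisection; the solver, tolerances, and test values are illustrative, not the algorithm's actual weighted fit.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_fk(f, k):
    """Invert w^2 = G*k*tanh(k*h) for depth h by bisection; f in Hz, k in rad/m."""
    w2 = (2 * np.pi * f) ** 2
    lo, hi = 1e-3, 100.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if G * k * np.tanh(k * mid) < w2:   # dispersion is monotone in depth
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: pick a depth, compute the dispersion wavenumber numerically,
# then recover the depth from the (f, k) pair
h_true, f = 6.0, 0.1                   # 6 m depth, 0.1 Hz swell
w2 = (2 * np.pi * f) ** 2
k = w2 / G                             # deep-water initial guess
for _ in range(100):                   # fixed-point iteration for k
    k = w2 / (G * np.tanh(k * h_true))
print(depth_from_fk(f, k))             # ~6.0
```

In the full algorithm many such pairs are combined with quality weights, and the per-pair depth sensitivity degrades in deep water where tanh(k·h) saturates, which is consistent with the larger offshore errors reported above.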
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics.
Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
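Formal error covariance propagation of the kind used above reduces, for a linear model y = J·x, to C_y = J·C_x·Jᵀ. The sketch below shows that generic form with toy numbers chosen to echo the quoted budget (an ~11 cm total splitting into a 3.5 cm time-dependent part and a geoid-dominated remainder); the two-source decomposition and values are an assumption for illustration, not the actual TOPEX/POSEIDON budget.

```python
import numpy as np

def propagate_covariance(J, C):
    """Propagate parameter covariance C through a linear model with Jacobian J."""
    return J @ C @ J.T

# Toy budget: topography error as a sum of two uncorrelated sources
J = np.array([[1.0, 1.0]])             # topography = time-dependent term + geoid term
C = np.diag([0.035**2, 0.104**2])      # 3.5 cm and ~10.4 cm one-sigma errors (m)
total = np.sqrt(propagate_covariance(J, C)[0, 0])
print(round(total, 3))                 # ~0.11 m, i.e. ~11 cm RMS
```

With correlated sources the off-diagonal terms of C enter as well, which is why the geographic error-correlation length scales derived in the study matter for assimilation.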
NASA Astrophysics Data System (ADS)
Chen, S.; Chen, H.; Hu, J.; Zhang, A.; Min, C.
2017-12-01
More than three years have passed since the launch of the Global Precipitation Measurement (GPM) core satellite on February 27, 2014. This satellite carries two core sensors, the dual-frequency precipitation radar (DPR) and the GPM microwave imager (GMI), which are state-of-the-art sensors for observing precipitation over the globe. The DPR level-2 product provides both precipitation rates and phases. The precipitation phase information can help advance global hydrological cycle modeling, and is particularly crucial for high-altitude and high-latitude regions where solid precipitation is the dominant source of water. However, the reliability and accuracy of the DPR level-2 product are still not well established. Assessing the performance and uncertainty of precipitation retrievals derived from the DPR on board the satellite is needed by the precipitation algorithm developers and the end users in the hydrology, weather, meteorology, and hydro-related communities. In this study, the precipitation estimation derived from the DPR is compared with that derived from the CSU-CHILL National Weather Radar from March 2014 to October 2017. The CSU-CHILL radar is located in Greeley, CO, and is an advanced, transportable dual-polarized dual-wavelength (S- and X-band) weather radar. The systematic and random errors of the DPR in measuring precipitation are analyzed as a function of precipitation rate and precipitation type (liquid and solid). This study is expected to offer insights into the performance of this most advanced sensor and thus provide useful feedback to the algorithm developers as well as the GPM data end users.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences from valid kmers with multiple occurrences in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold used for validating kmers whose estimated genomic frequency exceeds it. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at http://aluru-sun.ece.iastate.edu/doku.php?id=redeem.
We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
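To make the baseline thresholding idea concrete, here is a minimal sketch of frequency-based k-mer error detection, the standard approach the abstract improves upon; the reads, k, and threshold below are illustrative, not from the paper.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count occurrences of every k-mer across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_suspect_kmers(reads, k, threshold):
    """k-mers observed fewer than `threshold` times are treated as likely
    sequencing errors (an assumption that breaks down for high-repeat
    genomes, which is the problem the paper addresses)."""
    counts = kmer_counts(reads, k)
    return {kmer for kmer, c in counts.items() if c < threshold}

# Illustrative toy reads: the last base of the second read is a miscall.
reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACGTACGT"]
suspects = flag_suspect_kmers(reads, k=5, threshold=2)
```

The paper's contribution replaces the fixed `threshold` with one inferred from the misread relationships among observed k-mers.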
NASA Astrophysics Data System (ADS)
Wang, Haijiang; Yang, Ling
2014-12-01
In this paper, vector analysis is applied to the study of the illuminated area and the Doppler frequency distribution for airborne pulse radar. An important feature of vector analysis is that it closely combines geometric ideas with algebraic calculations. Through coordinate transforms, the relationship between the radar antenna frame and the ground, under aircraft motion attitudes, is derived. From a time-space analysis, the overlap area between the footprint of the radar beam and the pulse-illuminated zone is obtained. Furthermore, the Doppler frequency expression is deduced, and the Doppler frequency distribution is plotted. Using the time-space analysis results, some important parameters of a specified airborne radar system are obtained. The results are also applied to correct the phase error introduced by attitude changes in airborne synthetic aperture radar (SAR) imaging.
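The Doppler expression deduced in the paper depends on platform attitude and geometry; as a simpler reference point, the classical two-way Doppler shift for a monostatic radar can be sketched as follows (the function name and parameters are illustrative, not the paper's derivation):

```python
import math

def doppler_frequency(speed, wavelength, angle_rad):
    """Two-way Doppler shift seen by a monostatic airborne radar for a
    ground point whose line of sight makes `angle_rad` with the platform
    velocity vector: f_d = 2 * v * cos(theta) / lambda."""
    return 2.0 * speed * math.cos(angle_rad) / wavelength

# 100 m/s platform, 3 cm wavelength (X-band), boresight along the velocity:
fd_max = doppler_frequency(100.0, 0.03, 0.0)
```

The paper's attitude-dependent expression generalizes this by rotating the line-of-sight vector through the aircraft's roll, pitch, and yaw before taking the projection onto the velocity.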
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error Concealment (EC) is a technique applied at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is important because video is used in many applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the preferred option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are applied to video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation achieves better quality measures, with 48% higher PSNR and 94% higher SSIM, than the Block Matching algorithm.
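As a reference for the PSNR figures quoted above, here is a minimal sketch of the standard PSNR computation for 8-bit frames (assuming numpy arrays; this is generic, not code from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

A concealed frame is scored against the original, error-free frame; higher PSNR means the concealment left less visible distortion.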
Analysis of Wind Tunnel Lateral Oscillatory Data of the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.; Szyba, Nathan M.
2004-01-01
Static and dynamic wind tunnel tests were performed on an 18% scale model of the F-16XL aircraft. These tests were performed over a wide range of angles of attack and sideslip with oscillation amplitudes from 5 deg. to 30 deg. and reduced frequencies from 0.073 to 0.269. Harmonic analysis was used to estimate Fourier coefficients and in-phase and out-of-phase components. For frequency dependent data from rolling oscillations, a two-step regression method was used to obtain unsteady models (indicial functions), and derivatives due to sideslip angle, roll rate and yaw rate from in-phase and out-of-phase components. Frequency dependence was found for angles of attack between 20 deg. and 50 deg. Reduced values of coefficient of determination and increased values of fit error were found for angles of attack between 35 deg. and 45 deg. An attempt to estimate model parameters from yaw oscillations failed, probably due to the low number of test cases at different frequencies.
NASA Astrophysics Data System (ADS)
Li, Peng; Zhu, Zheng H.; Meguid, S. A.
2016-07-01
This paper studies pulse-width pulse-frequency modulation based trajectory planning for orbital rendezvous and proximity maneuvering near a non-cooperative spacecraft in an elliptical orbit. The problem is formulated by converting the continuous control input, output from the state-dependent model predictive control, into a sequence of pulses of constant magnitude by controlling the firing frequency and duration of constant-magnitude thrusters. The state-dependent model predictive control is derived by minimizing the control error of the states and the roughness of the control input for a safe, smooth and fuel-efficient approach trajectory. The resulting nonlinear programming problem is converted into a series of quadratic programming problems and solved by numerical iteration using the receding horizon strategy. The numerical results show that the proposed state-dependent model predictive control with pulse-width pulse-frequency modulation effectively generates optimized trajectories using equivalent control pulses for proximity maneuvering with less energy consumption.
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect on correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the spin-motion process, and that the number of smoothing passes can be estimated by the model before the process. The method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error left by Magnetorheological Finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.
Luo, Chengwei; Tsementzi, Despina; Kyrpides, Nikos; Read, Timothy; Konstantinidis, Konstantinos T
2012-01-01
Next-generation sequencing (NGS) is commonly used in metagenomic studies of complex microbial communities, but whether different NGS platforms recover the same diversity from a sample, and whether their assembled sequences are of comparable quality, remains unclear. We compared the two most frequently used platforms, the Roche 454 FLX Titanium and the Illumina Genome Analyzer (GA) II, on the same DNA sample obtained from a complex freshwater planktonic community. Despite the substantial differences in read length and sequencing protocols, the platforms provided a comparable view of the community sampled. For instance, derived assemblies overlapped in ~90% of their total sequences, and in situ abundances of genes and genotypes (estimated based on sequence coverage) correlated highly between the two platforms (R(2)>0.9). Evaluation of base-call error, frameshift frequency, and contig length suggested that Illumina offered equivalent, if not better, assemblies than Roche 454. The results from metagenomic samples were further validated against DNA samples of eighteen isolate genomes, which showed a range of genome sizes and G+C% content. We also provide quantitative estimates of the errors in gene and contig sequences assembled from datasets characterized by different levels of complexity and G+C% content. For instance, we noted that homopolymer-associated, single-base errors affected ~1% of the protein sequences recovered in Illumina contigs of 10× coverage and 50% G+C; this frequency increased to ~3% when non-homopolymer errors were also considered. Collectively, our results should serve as a useful practical guide for choosing proper sampling strategies and data processing protocols for future metagenomic studies.
The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors
NASA Technical Reports Server (NTRS)
Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan
1993-01-01
Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.
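For context, the textbook symbol error probability of noncoherent orthogonal M-FSK on an AWGN channel with perfect alignment, the baseline that the paper's timing and frequency offsets degrade, can be sketched as follows (generic formulas, not the paper's offset-dependent expressions):

```python
import math

def mfsk_ser(M, es_n0):
    """Symbol error probability of noncoherent orthogonal M-FSK on an AWGN
    channel with perfect time/frequency alignment. `es_n0` is Es/N0 (linear):
    Ps = sum_{n=1}^{M-1} (-1)^(n+1) C(M-1, n) / (n+1) * exp(-n*Es/((n+1)*N0))."""
    return sum((-1) ** (n + 1) * math.comb(M - 1, n) / (n + 1)
               * math.exp(-n * es_n0 / (n + 1))
               for n in range(1, M))

def mfsk_ber(M, es_n0):
    """Bit error probability for orthogonal signaling:
    Pb = (M/2) / (M - 1) * Ps."""
    return (M / 2) / (M - 1) * mfsk_ser(M, es_n0)
```

Time and frequency offsets reduce the effective correlator output energy, so the curves computed by the paper lie above this ideal baseline.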
Lee, Benjamin C; Moody, Jonathan B; Poitrasson-Rivière, Alexis; Melvin, Amanda C; Weinberg, Richard L; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L
2018-03-23
Patient motion can lead to misalignment of left ventricular volumes of interest and subsequently inaccurate quantification of myocardial blood flow (MBF) and flow reserve (MFR) from dynamic PET myocardial perfusion images. We aimed to identify the prevalence of patient motion in both blood and tissue phases and analyze the effects of this motion on MBF and MFR estimates. We selected 225 consecutive patients who underwent dynamic stress/rest rubidium-82 chloride (82Rb) PET imaging. Dynamic image series were iteratively reconstructed with 5- to 10-second frame durations over the first 2 minutes for the blood phase and 10 to 80 seconds for the tissue phase. Motion shifts were assessed by 3 physician readers from the dynamic series and analyzed for frequency, magnitude, time, and direction of motion. The effects of this motion isolated in time, direction, and magnitude on global and regional MBF and MFR estimates were evaluated. Flow estimates derived from the motion-corrected images were used as the error references. Mild to moderate motion (5-15 mm) was most prominent in the blood phase, occurring in 63% and 44% of the stress and rest studies, respectively. This motion was observed with frequencies of 75% in the septal and inferior directions for stress and 44% in the septal direction for rest. Images with blood-phase isolated motion had mean global MBF and MFR errors of 2%-5%. Isolating blood-phase motion in the inferior direction resulted in mean MBF and MFR errors of 29%-44% in the RCA territory. Flow errors due to tissue-phase isolated motion were within 1%. Patient motion was most prevalent in the blood phase, and MBF and MFR errors increased most substantially with motion in the inferior direction. Motion correction focused on these motions is needed to reduce MBF and MFR errors.
Error analysis for relay type satellite-aided search and rescue systems
NASA Technical Reports Server (NTRS)
Marini, J. W.
1977-01-01
An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near circular near polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.
Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin
2013-07-01
Emergency departments (EDs) are characterized by the simultaneous care of multiple patients with various medical conditions. Due to the large number of patients with complex diseases, the speed and complexity of medication use, and work in an under-staffed and crowded environment, medication errors are commonly perpetrated by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending an ED in a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Some factors related to medication errors, such as working shift, weekday, and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rate of medication errors was 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher in men, middle-aged patients, the first weekdays, night-time work schedules, and the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians, and the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%) followed by unauthorized drug (6.4%). Most of the medication errors happened with anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors resulting from multiple factors. The more common prescription errors happened during the ordering of drug dose and frequency; the more common administration errors included drug omission or unauthorized drug administration.
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
Diffraction analysis of sidelobe characteristics of optical elements with ripple error
NASA Astrophysics Data System (ADS)
Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie
2018-03-01
Ripple errors in lenses can lead to optical damage in high-energy laser systems. Analysis of the sidelobe on the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe of optical elements with ripple errors. First, we analyze the characteristics of ripple error and establish the relationship between ripple error and sidelobe: the sidelobe results from the diffraction of the ripple error, which tends to be periodic on the optical surface due to the fabrication method. Simulated experiments are carried out based on the angular spectrum method, characterizing the ripple error as rotationally symmetric periodic structures. The influence of the two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that both parameters affect the sidelobe at the image plane: the peak-to-valley value is the major factor affecting the energy proportion of the sidelobe, while the spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.
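The angular spectrum method named in the abstract propagates a sampled field by filtering its spatial-frequency spectrum with the free-space transfer function; a minimal sketch follows (grid parameters are illustrative, not the paper's simulation code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the angular
    spectrum method: filter the 2-D spectrum with exp(i*kz*z), where
    kz = 2*pi/lambda * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

A periodic ripple of spatial frequency ν imposed on the pupil phase diffracts energy into sidelobes at angles of roughly ±λν, which is why the ripple's spatial frequency governs where the sidelobe lands on the image plane.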
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason Erik
A frequency-dependent model for levels and decay rates of reverberant energy in systems of coupled rooms is developed and compared with measurements conducted in a 1:10 scale model and in Bass Hall, Fort Worth, TX. Schroeder frequencies of subrooms, fSch, characteristic size of coupling apertures, a, relative to wavelength lambda, and characteristic size of room surfaces, l, relative to lambda define the frequency regions. At high frequencies [HF (f >> fSch, a >> lambda, l >> lambda)], this work improves upon prior statistical-acoustics (SA) coupled-ODE models by incorporating geometrical-acoustics (GA) corrections for the model of decay within subrooms and the model of energy transfer between subrooms. Previous researchers developed prediction algorithms based on computational GA. Comparisons of predictions derived from beam-axis tracing with scale-model measurements indicate that systematic errors for coupled rooms result from earlier tail-correction procedures that assume constant quadratic growth of reflection density. A new algorithm is developed that uses ray tracing rather than tail correction in the late part and is shown to correct this error. At midfrequencies [MF (f >> fSch, a ~ lambda)], HF models are modified to account for wave effects at coupling apertures by including analytically or heuristically derived power transmission coefficients tau. This work improves upon prior SA models of this type by developing more accurate estimates of random-incidence tau. While the accuracy of the MF models is difficult to verify, scale-model measurements evidence the expected behavior. The Biot-Tolstoy-Medwin-Svensson (BTMS) time-domain edge-diffraction model is newly adapted to study transmission through apertures. Multiple-order BTMS scattering is theoretically and experimentally shown to be inaccurate due to the neglect of slope diffraction.
At low frequencies (f ~ fSch), scale-model measurements have been qualitatively explained by application of previously developed perturbation models. Measurements newly confirm that coupling strength between three-dimensional rooms is related to the unperturbed pressure distribution on the coupling surface. In Bass Hall, measurements are conducted to determine the acoustical effects of the coupled stage house on stage and in the audience area. The high-frequency predictions of statistical- and geometrical-acoustics models agree well with measured results. Predictions of the transmission coefficients of the coupling apertures agree, at least qualitatively, with the observed behavior.
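As background, the classical statistical-acoustics coupled-ODE model that this work refines can be sketched for two coupled rooms; the volumes, absorption areas, and aperture area below are illustrative values only, not the dissertation's measured halls.

```python
import numpy as np

# Minimal sketch of the classical SA coupled-ODE power balance for two rooms
# joined by an aperture of area S: energy leaves each room through absorption
# and the aperture, and enters through the aperture from the other room.
c = 343.0            # speed of sound, m/s
V = (1000.0, 300.0)  # room volumes, m^3 (illustrative)
A = (120.0, 6.0)     # Sabine absorption areas, m^2 (room 2 is reverberant)
S = 4.0              # coupling-aperture area, m^2

def decay(E0, dt=1e-4, steps=20000):
    """Forward-Euler integration of
    V_i dE_i/dt = -(c/4)(A_i + S) E_i + (c/4) S E_j."""
    E = np.array(E0, dtype=float)
    levels = []
    for _ in range(steps):
        dE1 = (-(c / 4) * (A[0] + S) * E[0] + (c / 4) * S * E[1]) / V[0]
        dE2 = (-(c / 4) * (A[1] + S) * E[1] + (c / 4) * S * E[0]) / V[1]
        E = E + dt * np.array([dE1, dE2])
        levels.append(E.copy())
    return np.array(levels)
```

Because the reverberant subroom decays more slowly, the main room's decay curve bends from the fast eigenmode to the slow one, producing the double-slope decays characteristic of coupled auditoria.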
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10^-12 rad/sec.
Grammatical Errors Produced by English Majors: The Translation Task
ERIC Educational Resources Information Center
Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad
2011-01-01
This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…
Sutton, G. G.; Sykes, K.
1967-01-01
1. When a subject attempts to exert a steady pressure on a joystick he makes small unavoidable errors which, irrespective of their origin or frequency, may be called tremor. 2. Frequency analysis shows that low frequencies always contribute much more to the total error than high frequencies. If the subject is not allowed to check his performance visually, but has to rely on sensations of pressure in the finger tips, etc., the error power spectrum plotted on logarithmic co-ordinates approximates to a straight line falling at 6 dB/octave from 0.4 to 9 c/s. In other words, the amplitude of the tremor component at each frequency is inversely proportional to frequency. 3. When the subject is given a visual indication of his errors on an oscilloscope the shape of the tremor spectrum alters. The most striking change is the appearance of a tremor peak at about 9 c/s, but there is also a significant increase of error in the range 1-4 c/s. The extent of these changes varies from subject to subject. 4. If the 9 c/s peak represents oscillation of a muscle length-servo it would appear that greater use is made of this servo when positional information is available from the eyes than when proprioceptive impulses from the limbs have to be relied on. PMID:6048997
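Point 2's equivalence between amplitude proportional to 1/f and a 6 dB/octave fall in power can be checked with one line of arithmetic:

```python
import math

# If tremor amplitude A(f) is proportional to 1/f, then power P(f) goes
# as 1/f^2, so doubling the frequency (one octave) divides power by 4.
def db_drop_per_octave():
    p_ratio = (1.0 / 1.0) ** 2 / (1.0 / 2.0) ** 2  # P(f) / P(2f) = 4
    return 10.0 * math.log10(p_ratio)              # = 10 log10(4) ≈ 6.02 dB
```

So a spectrum whose amplitude is inversely proportional to frequency falls at almost exactly 6 dB per octave, as the abstract states.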
Addressing the unit of analysis in medical care studies: a systematic review.
Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G
2008-06-01
We assessed the frequency with which patients are incorrectly used as the unit of analysis among studies of physicians' patient care behavior published in high-impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found, published in 15 journals; 4 journals published the majority (71 of 114, or 62.3%) of the studies; 40 were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in current sea surface slope models. (1) The small, and thus negligible, corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide, and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction, and sea state bias. Radiometer measurements are preferred over model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to range observations when computing sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with uniform weights over a suitable window is an effective strategy to avoid introducing extra noise. The slopes calculated from radiometer wet tropospheric corrections, and from along-track-smoothed dual-frequency ionospheric corrections and sea state bias, are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes fall within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and tidal models with better precision or with an extending process (e.g., Got-e) are strongly recommended for updating the corrections in the geophysical data records.
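The along-track smoothing recommended for the ionospheric and sea-state-bias corrections amounts to a uniform-weight moving average; a minimal sketch follows (the window width is an assumption of the example, not a value from the paper):

```python
import numpy as np

def along_track_smooth(values, window):
    """Uniform-weight moving average over `window` consecutive along-track
    samples. Endpoints average over a shrinking window so the record keeps
    its original length."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(values, kernel, mode="same")
    # Renormalize edge bins, where the kernel only partially overlaps the data.
    norm = np.convolve(np.ones_like(values), kernel, mode="same")
    return smoothed / norm
```

Differencing adjacent smoothed corrections then yields correction slopes whose high-frequency noise has been suppressed, which is the point of smoothing before forming sea surface slopes.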
NASA Technical Reports Server (NTRS)
Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel
2012-01-01
Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with photon-counting detection has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for the symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provides no essential information-theoretic gain over single-pulse PPM.
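The combinatorial intuition behind the dual-pulse gain is that placing n pulses in M slots yields log2(C(M, n)) bits per symbol rather than log2(M); a hedged sketch of the resulting throughput comparison (the slot and dead times are illustrative parameters, not the paper's constraint set):

```python
import math

def ppm_bits_per_symbol(M, pulses=1):
    """Information carried by one PPM symbol with `pulses` pulses placed in
    M slots: log2 of the number of distinct pulse patterns, C(M, pulses)."""
    return math.log2(math.comb(M, pulses))

def ppm_throughput(M, pulses, slot_time, dead_time=0.0):
    """Bits per second when each M-slot symbol occupies M*slot_time plus any
    PRF-imposed dead time between symbols."""
    return ppm_bits_per_symbol(M, pulses) / (M * slot_time + dead_time)
```

With M = 16 and 1 ns slots, single-pulse PPM carries 4 bits per symbol while dual-pulse carries log2(120) ≈ 6.9 bits in the same symbol time, which is where the improved channel utilization under fixed pulse-width and PRF constraints comes from.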
Performance evaluation of wireless communications through capsule endoscope.
Takizawa, Kenichi; Aoyagi, Takahiro; Hamaguchi, Kiyoshi; Kohno, Ryuji
2009-01-01
This paper presents a performance evaluation of wireless communications applicable to a capsule endoscope. A numerical model describing the received signal strength (RSS) radiated from a capsule-sized signal generator is derived through measurements using a liquid phantom with equivalent electrical constants. By introducing this model and taking into account the direction pattern of the capsule and the propagation distance between the implanted capsule and the on-body antenna, a cumulative distribution function (CDF) of the received SNR is evaluated. Simulation results for the error ratio in the wireless channel are then obtained. These results show that frequencies of 611 MHz or lower would be useful for capsule endoscope applications from the viewpoint of error rate performance. Further, we show that the use of antenna diversity brings additional gain to this application.
NASA Astrophysics Data System (ADS)
Du, J.; Kimball, J. S.; Galantowicz, J. F.; Kim, S.; Chan, S.; Reichle, R. H.; Jones, L. A.; Watts, J. D.
2017-12-01
A method to monitor global land surface water (fw) inundation dynamics was developed by exploiting the enhanced fw sensitivity of L-band (1.4 GHz) passive microwave observations from the Soil Moisture Active Passive (SMAP) mission. The L-band fw (fwLBand) retrievals were derived using SMAP H-polarization brightness temperature (Tb) observations and predefined L-band reference microwave emissivities for water and land endmembers. Potential soil moisture and vegetation contributions to the microwave signal were represented from overlapping higher frequency Tb observations from AMSR2. The resulting fwLBand global record has high temporal sampling (1-3 days) and 36-km spatial resolution. The fwLBand annual averages corresponded favourably (R=0.84, p<0.001) with a 250-m resolution static global water map (MOD44W) aggregated at the same spatial scale, while capturing significant inundation variations worldwide. The monthly fwLBand averages also showed seasonal inundation changes consistent with river discharge records within six major US river basins. An uncertainty analysis indicated generally reliable fwLBand performance for major land cover areas and under low to moderate vegetation cover, but with lower accuracy for detecting water bodies covered by dense vegetation. Finer resolution (30-m) fwLBand results were obtained for three sub-regions in North America using an empirical downscaling approach and ancillary global Water Occurrence Dataset (WOD) derived from the historical Landsat record. The resulting 30-m fwLBand retrievals showed favourable classification accuracy for water (commission error 31.84%; omission error 28.08%) and land (commission error 0.82%; omission error 0.99%) and seasonal wet and dry periods when compared to independent water maps derived from Landsat-8 imagery. 
The new fwLBand algorithms and continuing SMAP and AMSR2 operations provide for near real-time, multi-scale monitoring of global surface water inundation dynamics, potentially benefiting hydrological monitoring, flood assessments, and global climate and carbon modeling.
Common medial frontal mechanisms of adaptive control in humans and rodents
Frank, Michael J.; Laubach, Mark
2013-01-01
In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Reilly, Meaghan A., E-mail: moreilly@sri.utoront
Purpose: Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and for the registration of preoperative computed tomography (CT)-data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. Methods: A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. Results: The results show on an average sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). Conclusions: If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available.
O'Reilly, Meaghan A; Jones, Ryan M; Birman, Gabriel; Hynynen, Kullervo
2016-09-01
Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and for the registration of preoperative computed tomography (CT)-data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. The results show on an average sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available.
O’Reilly, Meaghan A.; Jones, Ryan M.; Birman, Gabriel; Hynynen, Kullervo
2016-01-01
Purpose: Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and for the registration of preoperative computed tomography (CT)-data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. Methods: A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. Results: The results show on an average sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). Conclusions: If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available. PMID:27587036
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for establishing and revising garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining error frequency and applying the analysis-of-variance method from mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studies the causes of the data errors, and summarizes the key points for minimizing errors. By analyzing measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.
Random Vibration Analysis of the Tip-tilt System in the GMT Fast Steering Secondary Mirror
NASA Astrophysics Data System (ADS)
Lee, Kyoung-Don; Kim, Young-Soo; Kim, Ho-Sang; Lee, Chan-Hee; Lee, Won Gi
2017-09-01
A random vibration analysis was accomplished on the tip-tilt system of the fast steering secondary mirror (FSM) for the Giant Magellan Telescope (GMT). As the FSM is to be mounted on the top end of the secondary truss and disturbed by the winds, the dynamic effects of the FSM disturbances on the tip-tilt correction performance were studied. The coupled dynamic responses of the FSM segments were evaluated with a suggested tip-tilt correction model. Dynamic equations for the tip-tilt system were derived from the force and moment equilibrium on the segment mirror and the geometric compatibility conditions with four design parameters. Statically stationary responses for the tip-tilt actuations to correct the wind-induced disturbances were studied with two design parameters based on the spectral density function of the star image errors in the frequency domain. Frequency response functions and root mean square values of the dynamic responses and the residual star image errors were numerically calculated for the off-axis and on-axis segments of the FSM. A prototype of the on-axis segment of the FSM was developed for tip-tilt actuation tests to confirm the ratio of tip-tilt force to tip-tilt angle calculated from the suggested dynamic equations of the tip-tilt system. Tip-tilt actuation tests were executed at 4, 8 and 12 Hz by measuring displacements of the piezoelectric actuators and the reaction forces acting on the axial supports. The ratios of rms tip-tilt force to rms tip-tilt angle derived from the tests showed good correlation with the numerical results. The suggested process of random vibration analysis of the tip-tilt system to correct the wind-induced disturbances of the FSM segments would be useful for advancing the FSM design and improving its capability to minimize residual star image errors by understanding the details of the dynamics.
Rapid estimation of frequency response functions by close-range photogrammetry
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1985-01-01
The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least-squares estimation of the full 3×3 frequency response matrix for a three-dimensional structure.
Digital implementation of a laser frequency stabilisation technique in the telecommunications band
NASA Astrophysics Data System (ADS)
Jivan, Pritesh; van Brakel, Adriaan; Manuel, Rodolfo Martínez; Grobler, Michael
2016-02-01
Laser frequency stabilisation in the telecommunications band was realised using the Pound-Drever-Hall (PDH) error signal. The transmission spectrum of the Fabry-Perot cavity was used, as opposed to the traditionally used reflected spectrum. A comparison was made between an analogue and a digitally implemented system. This study forms part of an initial step towards developing a portable optical time and frequency standard. The frequency discriminator used in the experimental setup was a fibre-based Fabry-Perot etalon. The phase-sensitive system made use of the optical heterodyne technique to detect changes in the phase of the system. A lock-in amplifier was used to filter and mix the input signals to generate the error signal, which may then be used to generate a control signal via a PID controller. An error signal was realised at a wavelength of 1556 nm, corresponding to an optical frequency of approximately 192.7 THz. An implementation of the analogue PDH technique yielded an error signal with a bandwidth of 6.134 GHz, while a digital implementation yielded a bandwidth of 5.774 GHz.
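For orientation, the wavelength-to-frequency conversion quoted above is simply f = c/λ; a one-line sketch:

```python
# Convert an optical wavelength to frequency via f = c / lambda.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_to_frequency_thz(wavelength_nm: float) -> float:
    """Optical frequency in THz for a vacuum wavelength given in nm."""
    return C / (wavelength_nm * 1e-9) / 1e12

# 1556 nm lies in the telecom band and corresponds to ~192.7 THz.
print(round(wavelength_to_frequency_thz(1556.0), 1))
```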
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approximately 0.4%, and frequency errors are reduced by a factor of approximately 20 at 303 K, to approximately 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
Optimising 4-D surface change detection: an approach for capturing rockfall magnitude-frequency
NASA Astrophysics Data System (ADS)
Williams, Jack G.; Rosser, Nick J.; Hardy, Richard J.; Brain, Matthew J.; Afana, Ashraf A.
2018-02-01
We present a monitoring technique tailored to analysing change from near-continuously collected, high-resolution 3-D data. Our aim is to fully characterise geomorphological change typified by an event magnitude-frequency relationship that adheres to an inverse power law or similar. While recent advances in monitoring have enabled changes in volume across more than 7 orders of magnitude to be captured, event frequency is commonly assumed to be interchangeable with the time-averaged event numbers between successive surveys. Where events coincide or coalesce, or where the mechanisms driving change are not spatially independent, apparent event frequency must be partially determined by the survey interval. The data reported have been obtained from a permanently installed terrestrial laser scanner, which permits an increased frequency of surveys. Surveying from a single position raises challenges, given the single viewpoint onto a complex surface and the need for computational efficiency associated with handling a large time series of 3-D data. A workflow is presented that optimises the detection of change by filtering and aligning scans to improve repeatability. An adaptation of the M3C2 algorithm is used to detect 3-D change and to overcome data inconsistencies between scans. Individual rockfall geometries are then extracted and the associated volumetric errors modelled. The utility of this approach is demonstrated using a dataset of ~9 × 10³ surveys acquired at ~1 h intervals over 10 months. The magnitude-frequency distribution of rockfall volumes generated is shown to be sensitive to monitoring frequency. Using a 1 h interval between surveys, rather than 30 days, the volume contribution from small (< 0.1 m³) rockfalls increases from 67% to 98% of the total, and the number of individual rockfalls observed increases by over 3 orders of magnitude.
High-frequency monitoring therefore holds considerable implications for magnitude-frequency derivatives, such as hazard return intervals and erosion rates. As such, while high-frequency monitoring has the potential to describe short-term controls on geomorphological change and more realistic magnitude-frequency relationships, the assessment of longer-term erosion rates may be better suited to less-frequent data collection with lower accumulated errors.
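The inverse power law assumed for the magnitude-frequency relationship can be fitted, for instance, with the standard continuous maximum-likelihood estimator. A minimal sketch on synthetic volumes (the threshold, exponent, and sample are illustrative, not the paper's data):

```python
import math, random

def powerlaw_mle_exponent(volumes, v_min):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(v / v_min)),
    computed over all events with v >= v_min."""
    tail = [v for v in volumes if v >= v_min]
    return 1.0 + len(tail) / sum(math.log(v / v_min) for v in tail)

# Synthetic rockfall volumes drawn from p(v) ~ v^-2 above 1e-3 m^3 by
# inverse-CDF sampling; the estimator should recover alpha close to 2.
random.seed(0)
sample = [1e-3 * (1.0 - random.random()) ** (-1.0) for _ in range(20000)]
print(round(powerlaw_mle_exponent(sample, 1e-3), 2))
```

Because short survey intervals recover many more small events, the fitted exponent itself can shift with monitoring frequency, which is the sensitivity the abstract reports.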
NASA Astrophysics Data System (ADS)
Jia, Mei-Hui; Wang, Cheng-Lin; Ren, Bin
2017-07-01
The stress, strain, and vibration characteristics of rotor parts change significantly under high acceleration, and manufacturing error is one of the most important reasons. However, current research has not addressed this problem. Taking a rotor with an acceleration of 150,000 g as the objective, the effects of manufacturing errors on the rotor's mechanical properties and dynamic characteristics are examined by selecting the key influencing factors. By establishing the force-balance equation for an infinitesimal rotor element, a theoretical stress model based on the slice method is proposed, and a formula for the rotor stress at any point is derived. A finite element model (FEM) of a rotor with holes is established, incorporating manufacturing errors. The changes in the stresses and strains of the rotor under parallelism and symmetry errors are analyzed, which verifies the validity of the theoretical model. A pre-stressed modal analysis is performed based on the aforementioned static analysis, and the key dynamic characteristics are analyzed. The results demonstrate that, as the parallelism and symmetry errors increase, the equivalent stresses and strains of the rotor increase slowly and linearly; the highest growth rate does not exceed 4%, and the maximum change in natural frequency is 0.1%. The rotor vibration mode is not significantly affected. The FEM construction method for a rotor with manufacturing errors can be used for quantitative research on rotor characteristics, which will assist in the active control of rotor component reliability under high acceleration.
Fourier Transform Ion Cyclotron Resonance Mass Spectrometry at the Cyclotron Frequency.
Nagornov, Konstantin O; Kozhinov, Anton N; Tsybin, Yury O
2017-04-01
The phenomenon of ion cyclotron resonance allows the mass-to-charge ratio, m/z, of an ensemble of ions to be determined by measuring their cyclotron frequency, ω_c. In Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS), the ω_c quantity is usually unavailable for direct measurement: the resonant state is located close to the reduced cyclotron frequency (ω_+), and the ω_c and corresponding m/z values must be calculated via theoretical derivation from an experimental estimate of the ω_+ quantity. Here, we describe an experimental observation of a new resonant state, which is located close to the ω_c frequency and is established because of the azimuthally dependent trapping electric fields of the recently developed ICR cells with narrow-aperture detection electrodes. We show that in mass spectra, peaks close to ω_+ frequencies can be reduced to negligible levels relative to peaks close to ω_c frequencies. Due to the reduced errors with which the ω_c quantity is obtained, the new resonance provides a means of cyclotron frequency measurement with precision greater than that achieved when ω_+ frequency peaks are employed. The described phenomenon may be developed into an FT-ICR MS technology with increased mass accuracy for applications in basic research and the life and environmental sciences.
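The underlying relation ω_c = qB/m means m/z follows directly from a measured cyclotron frequency; a minimal sketch with illustrative numbers (the field strength and frequency are not from the paper):

```python
import math

ATOMIC_MASS = 1.66053906660e-27     # atomic mass constant, kg
ELEMENTARY_CHARGE = 1.602176634e-19  # C

def mz_from_cyclotron_freq(f_c_hz: float, b_tesla: float) -> float:
    """Unperturbed cyclotron frequency omega_c = qB/m implies
    m/z = eB / (2*pi*f_c), returned here in Da per elementary charge."""
    return ELEMENTARY_CHARGE * b_tesla / (2 * math.pi * f_c_hz) / ATOMIC_MASS

# In a 10 T magnet, an ion observed at f_c ≈ 153.6 kHz corresponds
# to m/z ≈ 1000 (a singly charged ion of ~1000 Da).
print(round(mz_from_cyclotron_freq(153_600.0, 10.0)))
```

Measuring at ω_+ instead of ω_c requires correcting this relation for the trapping field, which is the source of the errors the new resonance avoids.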
On low-frequency errors of uniformly modulated filtered white-noise models for ground motions
Safak, Erdal; Boore, David M.
1988-01-01
Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first, and then filtered).
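The distinction the abstract draws is purely one of operation order: a sketch contrasting filter-then-modulate (the uniformly modulated filtered white-noise model) with modulate-then-filter (the filtered shot-noise-type fix), using a simple one-pole filter and an illustrative envelope (both are stand-ins, not the paper's filter or envelope):

```python
import math, random

def envelope(t, duration):
    """Illustrative modulating envelope rising to a peak, then decaying."""
    return (t / duration) * math.exp(1.0 - t / duration)

def one_pole_filter(x, a=0.9):
    """First-order recursive low-pass: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y, prev = [], 0.0
    for v in x:
        prev = a * prev + (1.0 - a) * v
        y.append(prev)
    return y

random.seed(1)
n, dur = 2000, 2000.0
white = [random.gauss(0.0, 1.0) for _ in range(n)]

# Uniformly modulated filtered white noise: filter first, then modulate.
modulated_filtered = [envelope(i, dur) * y
                      for i, y in enumerate(one_pole_filter(white))]

# Filtered shot-noise-type model: modulate first, then filter, so the
# filter's low-frequency memory acts on the already-windowed process.
filtered_modulated = one_pole_filter(
    [envelope(i, dur) * w for i, w in enumerate(white)])
```

The two series differ precisely in their low-frequency content, which is why the second ordering removes the long-period overestimation described above.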
A low-cost acoustic permeameter
NASA Astrophysics Data System (ADS)
Drake, Stephen A.; Selker, John S.; Higgins, Chad W.
2017-04-01
Intrinsic permeability is an important parameter that regulates air exchange through porous media such as snow. Standard methods of measuring snow permeability are inconvenient to perform outdoors, are fraught with sampling errors, and require specialized equipment, while bringing intact samples back to the laboratory is also challenging. To address these issues, we designed, built, and tested a low-cost acoustic permeameter that allows computation of volume-averaged intrinsic permeability for a homogenous medium. In this paper, we validate acoustically derived permeability of homogenous, reticulated foam samples by comparison with results derived using a standard flow-through permeameter. Acoustic permeameter elements were designed for use in snow, but the measurement methods are not snow-specific. The electronic components - consisting of a signal generator, amplifier, speaker, microphone, and oscilloscope - are inexpensive and easily obtainable. The system is suitable for outdoor use when it is not precipitating, but the electrical components require protection from the elements in inclement weather. The permeameter can be operated with a microphone either internally mounted or buried a known depth in the medium. The calibration method depends on choice of microphone positioning. For an externally located microphone, calibration was based on a low-frequency approximation applied at 500 Hz that provided an estimate of both intrinsic permeability and tortuosity. The low-frequency approximation that we used is valid up to 2 kHz, but we chose 500 Hz because data reproducibility was maximized at this frequency. For an internally mounted microphone, calibration was based on attenuation at 50 Hz and returned only intrinsic permeability. We found that 50 Hz corresponded to a wavelength that minimized resonance frequencies in the acoustic tube and was also within the response limitations of the microphone. 
We used reticulated foam of known permeability (ranging from 2 × 10-7 to 3 × 10-9 m2) and estimated tortuosity of 1.05 to validate both methods. For the externally mounted microphone the mean normalized standard deviation was 6 % for permeability and 2 % for tortuosity. The mean relative error from known measurements was 17 % for permeability and 2 % for tortuosity. For the internally mounted microphone the mean normalized standard deviation for permeability was 10 % and the relative error was also 10 %. Permeability determination for an externally mounted microphone is less sensitive to environmental noise than is the internally mounted microphone and is therefore the recommended method. The approximation using the internally mounted microphone was developed as an alternative for circumstances in which placing the microphone in the medium was not feasible. Environmental noise degrades precision of both methods and is recognizable as increased scatter for replicate data points.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
NASA Astrophysics Data System (ADS)
Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi
2016-04-01
The past decade has seen phenomenal usage of orthogonal frequency division multiplexing (OFDM) in the wired as well as wireless communication domains, and it has also been proposed in the literature as a future-proof technique for implementing flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation become critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self-phase modulation, cross-phase modulation, four-wave mixing, the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and the photodetector noises. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance analysis of the proposed model is presented and compared for dispersion-compensated and uncompensated backbone/backhaul links. The results suggest that OFDM would perform better for uncompensated links than for compensated links, due to the negligible FWM effects, and that there is a need for flexible compensation. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
Generalized Autobalanced Ramsey Spectroscopy of Clock Transitions
NASA Astrophysics Data System (ADS)
Yudin, V. I.; Taichenachev, A. V.; Basalaev, M. Yu.; Zanon-Willette, T.; Pollock, J. W.; Shuker, M.; Donley, E. A.; Kitching, J.
2018-05-01
When performing precision measurements, the quantity being measured is often perturbed by the measurement process itself. Such measurements include precision frequency measurements for atomic clock applications carried out with Ramsey spectroscopy. With the aim of eliminating probe-induced perturbations, a method of generalized autobalanced Ramsey spectroscopy (GABRS) is presented and rigorously substantiated. The usual local-oscillator frequency control loop is augmented with a second control loop derived from secondary Ramsey sequences interspersed with the primary sequences and with a different Ramsey period. This second loop feeds back to a secondary clock variable and ultimately compensates for the perturbation of the clock frequency caused by the measurements in the first loop. We show that such a two-loop scheme can lead to perfect compensation for measurement-induced light shifts and does not suffer from the effects of relaxation, time-dependent pulse fluctuations and phase-jump modulation errors that are typical of other hyper-Ramsey schemes. Several variants of GABRS are explored based on different secondary variables including added relative phase shifts between Ramsey pulses, external frequency-step compensation, and variable second-pulse duration. We demonstrate that a universal antisymmetric error signal, and hence perfect compensation at a finite modulation amplitude, is generated only if an additional frequency step applied during both Ramsey pulses is used as the concomitant variable parameter. This universal technique can be applied to the fields of atomic clocks, high-resolution molecular spectroscopy, magnetically induced and two-photon probing schemes, Ramsey-type mass spectrometry, and the field of precision measurements. Some variants of GABRS can also be applied for rf atomic clocks using coherent-population-trapping-based Ramsey spectroscopy of the two-photon dark resonance.
Frequency synchronization of a frequency-hopped MFSK communication system
NASA Technical Reports Server (NTRS)
Huth, G. K.; Polydoros, A.; Simon, M. K.
1981-01-01
This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
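Step (4) of the method above, least-squares estimation of the error signal's parameters, can be sketched for the known-frequency case: fit s[n] ≈ a·cos(ωn) + b·sin(ωn) by linear least squares, then recover amplitude and phase. This batch formulation is a simplification of the recursive estimator the patent describes; the signal parameters below are illustrative:

```python
import math

def estimate_amp_phase(samples, freq_hz, fs_hz):
    """Least-squares fit of s[n] ≈ a*cos(w n) + b*sin(w n) at a known
    frequency; returns (amplitude, phase) with s[n] = A*cos(w n + phase)."""
    w = 2 * math.pi * freq_hz / fs_hz
    cc = ss = cs = cx = sx = 0.0
    for n, s in enumerate(samples):
        c, si = math.cos(w * n), math.sin(w * n)
        cc += c * c; ss += si * si; cs += c * si
        cx += c * s; sx += si * s
    # Solve the 2x2 normal equations [[cc, cs], [cs, ss]] @ [a, b] = [cx, sx].
    det = cc * ss - cs * cs
    a = (ss * cx - cs * sx) / det
    b = (cc * sx - cs * cx) / det
    return math.hypot(a, b), math.atan2(-b, a)

# A 50 Hz tone of amplitude 2 and phase 0.3 rad, sampled at 1 kHz.
fs, f0 = 1000.0, 50.0
signal = [2.0 * math.cos(2 * math.pi * f0 * n / fs + 0.3) for n in range(1000)]
amp, phase = estimate_amp_phase(signal, f0, fs)
print(round(amp, 3), round(phase, 3))  # → 2.0 0.3
```

In the patent's loop, the residual of such a fit plays the role of the error signal: the closer it is to zero, the more confident one can be in the estimates.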
Self-tuning regulators for multicyclic control of helicopter vibration
NASA Technical Reports Server (NTRS)
Johnson, W.
1982-01-01
A class of algorithms for the multicyclic control of helicopter vibration and loads is derived and discussed. This class is characterized by a linear, quasi-static, frequency-domain model of the helicopter response to control; identification of the helicopter model by least-squared-error or Kalman filter methods; and a minimum variance or quadratic performance function controller. Previous research on such controllers is reviewed. The derivations and discussions cover the helicopter model; the identification problem, including both off-line and on-line (recursive) algorithms; the control problem, including both open-loop and closed-loop feedback; and the various regulator configurations possible within this class. Conclusions from analysis and numerical simulations of the regulators provide guidance in the design and selection of algorithms for further development, including wind tunnel and flight tests.
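For the linear quasi-static model z = z0 + T*theta (vibration z, multicyclic control theta, identified response matrix T), the minimum-variance/quadratic-performance controller of this class reduces to a regularized least-squares solve. A minimal sketch under those assumptions; the weighting term lam is illustrative, not a value from the paper:

```python
import numpy as np

def min_variance_control(T, z0, lam=0.0):
    """Minimum-variance multicyclic control for the quasi-static model
    z = z0 + T @ theta: choose theta minimizing |z|^2 + lam*|theta|^2.
    Generic sketch of the controller class discussed in the abstract."""
    n = T.shape[1]
    return -np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ z0)
```

In the closed-loop feedback configurations discussed above, T would be re-identified recursively and this solve repeated each control update.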
NASA Astrophysics Data System (ADS)
Begum, A. Yasmine; Gireesh, N.
2018-04-01
In a superheater, steam temperature is controlled in a cascade control loop consisting of PI and PID controllers. To improve superheater steam temperature control, the controller gains in the cascade loop have to be tuned efficiently. The mathematical model of the superheater is derived from sets of nonlinear partial differential equations. The tuning methods studied here are designed for a first-order-plus-time-delay (FOPTD) transfer function model; hence, a FOPTD model is derived from the dynamical model of the superheater using the frequency response method. Optimum controller gains are then found using the Chien-Hrones-Reswick tuning algorithm and a gain-phase assignment algorithm, based on the least value of the integral time-weighted absolute error (ITAE).
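For a FOPTD model G(s) = K*exp(-L*s)/(T*s + 1), the Chien-Hrones-Reswick rules reduce to simple formulas. The sketch below uses the commonly tabulated 0%-overshoot, setpoint-tracking PI column; several CHR variants exist, so these constants are an assumption rather than the paper's specific choice:

```python
def chr_pi_gains(K, T, L):
    """Chien-Hrones-Reswick PI tuning for a FOPTD model
    G(s) = K * exp(-L*s) / (T*s + 1).
    Constants from the widely cited 0%-overshoot setpoint-tracking
    table (an assumption; other CHR variants use different values)."""
    Kp = 0.35 * T / (K * L)   # proportional gain
    Ti = 1.2 * T              # integral time
    return Kp, Ti
```

For example, a plant with K = 2, T = 10 s and L = 1 s yields Kp = 1.75 and Ti = 12 s under this table.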
Lorentz Atom Revisited by Solving the Abraham-Lorentz Equation of Motion
NASA Astrophysics Data System (ADS)
Bosse, Jürgen
2017-08-01
By solving the non-relativistic Abraham-Lorentz (AL) equation, I demonstrate that the AL equation of motion is not suited for treating the Lorentz atom, because a steady-state solution does not exist. The AL equation serves as a tool, however, for deducing the appropriate parameters Ω and Γ to be used with the equation of forced oscillations in modelling the Lorentz atom. The electric polarisability, which many authors "derived" from the AL equation in recent years, is shown to violate Kramers-Kronig relations rendering obsolete the extracted photon-absorption rate, for example. Fortunately, errors turn out to be small quantitatively, as long as the light frequency ω is neither too close to nor too far from the resonance frequency Ω. The polarisability and absorption cross section are derived for the Lorentz atom by purely classical reasoning and are shown to agree with the quantum mechanical calculations of the same quantities. In particular, oscillator parameters Ω and Γ deduced by treating the atom as a quantum oscillator are found to be equivalent to those derived from the classical AL equation. The instructive comparison provides a deep insight into understanding the great success of Lorentz's model that was suggested long before the advent of quantum theory.
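The forced-oscillation substitute advocated in the abstract can be summarized compactly. The following is the standard classical result in Gaussian units, stated here as background rather than as a reproduction of the paper's derivation:

```latex
% Driven, damped oscillator model of the Lorentz atom:
%   \ddot{x} + \Gamma\dot{x} + \Omega^2 x = -(e/m)\,E_0\,e^{-i\omega t}
% gives the classical polarisability
\alpha(\omega) = \frac{e^2/m}{\Omega^2 - \omega^2 - i\,\Gamma\omega},
% whose poles lie only in the lower half \omega-plane, so the
% Kramers--Kronig relations hold; the absorption cross section follows as
\sigma_{\mathrm{abs}}(\omega) = \frac{4\pi\omega}{c}\,\operatorname{Im}\alpha(\omega).
```

The paper's point is that an AL-derived polarisability, unlike this one, violates the Kramers-Kronig relations.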
A new polishing process for large-aperture and high-precision aspheric surface
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci
2013-07-01
A high-precision aspheric surface is hard to achieve because of mid-spatial-frequency (MSF) error in the finishing step. The influence of MSF error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP) and ion beam figuring (IBF) is proposed, and a 400 mm aperture parabolic surface is polished with it. SP is applied after rough machining to control the MSF error. In the middle finishing step, most of the low-spatial-frequency error is removed rapidly by MRF, then the MSF error is restricted by SP, and finally IBF is used to finish the surface. The surface accuracy is improved from an initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture, high-precision aspheric surfaces.
ERIC Educational Resources Information Center
Stokes, Stephanie F.; Lau, Jessica Tse-Kay; Ciocca, Valter
2002-01-01
This study examined the interaction of ambient frequency and feature complexity in the diphthong errors produced by 13 Cantonese-speaking children with phonological disorders. Perceptual analysis of 611 diphthongs identified those most frequently and least frequently in error. Suggested treatment guidelines include consideration of three factors:…
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four on extraction error frequency, one comparing different reviewer extraction methods and two comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insight into the influence of different extraction methods.
Zi, Fei; Wu, Xuejian; Zhong, Weicheng; Parker, Richard H; Yu, Chenghui; Budker, Simon; Lu, Xuanhui; Müller, Holger
2017-04-01
We present a hybrid laser frequency stabilization method combining modulation transfer spectroscopy (MTS) and frequency modulation spectroscopy (FMS) for the cesium D2 transition. In a typical pump-probe setup, the error signal is a combination of the DC-coupled MTS error signal and the AC-coupled FMS error signal, combining the long-term stability of the former with the high signal-to-noise ratio of the latter. In addition, we enhance the long-term frequency stability with laser intensity stabilization. By measuring the frequency difference between two independent hybrid spectroscopy setups, we investigate the short- and long-term stability. We find a long-term stability of 7.8 kHz, characterized by the standard deviation of the beat-frequency drift over the course of 10 h, and a short-term stability of 1.9 kHz, characterized by the Allan deviation at 2 s of integration time.
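The DC/AC combination of the two error signals is essentially a complementary-filter crossover. The sketch below illustrates that idea with first-order filters; the cutoff parameter alpha and the function name are illustrative choices, not values from the paper:

```python
import numpy as np

def hybrid_error(mts, fms, alpha=0.01):
    """Combine a DC-coupled MTS error signal with an AC-coupled FMS
    error signal using complementary first-order filters (sketch of
    the crossover idea; alpha sets the illustrative cutoff)."""
    out = np.empty(len(mts))
    lp = 0.0      # low-pass state for the MTS branch
    fm_lp = 0.0   # low-pass state used to high-pass the FMS branch
    for i, (m, f) in enumerate(zip(mts, fms)):
        lp += alpha * (m - lp)        # low-pass: keeps MTS long-term accuracy
        fm_lp += alpha * (f - fm_lp)  # high-pass(f) = f - low-pass(f)
        out[i] = lp + (f - fm_lp)     # complementary combination
    return out
```

At steady state the output tracks the MTS branch, while fast fluctuations pass through from the high-SNR FMS branch.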
Apparatus and Method to Enable Precision and Fast Laser Frequency Tuning
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R. (Inventor); Numata, Kenji (Inventor); Wu, Stewart T. (Inventor); Yang, Guangning (Inventor)
2015-01-01
An apparatus and method is provided to enable precision and fast laser frequency tuning. For instance, a fast tunable slave laser may be dynamically offset-locked to a reference laser line using an optical phase-locked loop. The slave laser is heterodyned against a reference laser line to generate a beatnote that is subsequently frequency divided. The phase difference between the divided beatnote and a reference signal may be detected to generate an error signal proportional to the phase difference. The error signal is converted into appropriate feedback signals to phase lock the divided beatnote to the reference signal. The slave laser frequency target may be rapidly changed based on a combination of a dynamically changing frequency of the reference signal, the frequency dividing factor, and an effective polarity of the error signal. Feed-forward signals may be generated to accelerate the slave laser frequency switching through laser tuning ports.
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
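The finite-difference step the abstract refers to is the second-order central difference, whose noise amplification scales as 1/h^2 with the spacing h (which is why the choice of spatial sampling matters). A minimal sketch of the operator, with an illustrative function name:

```python
import numpy as np

def curvature_central(w, h):
    """Second-order central difference of a sampled field w with
    spacing h; the two endpoints are dropped. Measurement noise of
    std sigma is amplified to roughly sqrt(6)*sigma/h**2, so a finer
    sampling than necessary degrades the curvature estimate."""
    return (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
```

The optimal sampling balances this noise amplification against the O(h^2) truncation error of the stencil.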
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
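Because a period jump error is by definition an integer multiple of the fringe period, a coarse but jump-free low-frequency code suffices to snap the fine code back onto the correct period. The sketch below illustrates that correction principle; the names and the exact rounding rule are illustrative and may differ from the paper's correction formula:

```python
import numpy as np

def correct_period_jumps(high, low, period):
    """Remove period-jump errors from the high-frequency absolute code
    `high` using a coarser, jump-free low-frequency code `low`.
    Any error in `high` that is an integer multiple of `period` is
    cancelled; `low` only needs to be accurate to within period/2."""
    k = np.round((low - high) / period)   # estimated number of jumped periods
    return high + k * period
```

This is why adding one set of low-frequency phase-shifted fringes makes the analog code unwrapping reliable.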
Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.
Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian
2014-03-01
Recent studies implicate a common response monitoring system that is active during both erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple tasks, but also in more complex tasks. However, it is still unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations, are related, or predictive for, BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time-domain information, time-frequency information, fMRI BOLD signal and behavioral parameters in a task examining error monitoring due to mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and the insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
NASA Astrophysics Data System (ADS)
Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin
2017-04-01
An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
Viallon, Magalie; Terraz, Sylvain; Roland, Joerg; Dumont, Erik; Becker, Christoph D; Salomir, Rares
2010-04-01
MR thermometry based on the proton resonance frequency shift (PRFS) is the most commonly used method for the monitoring of thermal therapies. As the chemical shift of water protons is temperature dependent, the local temperature variation (relative to an initial baseline) may be calculated from time-dependent phase changes in gradient-echo (GRE) MR images. Dynamic phase shift in GRE images is also produced by time-dependent changes in the magnetic bulk susceptibility of tissue. Gas bubbles (known as "white cavitation") are frequently visualized near the RF electrode in ultrasonography-guided radio frequency ablation (RFA). This study aimed to investigate the effects of RFA-induced cavitation by using simultaneous ultrasonography and MRI, both to visualize the cavitation and to quantify the subsequent magnetic susceptibility-mediated errors in concurrent PRFS MR thermometry (MRT), as well as to propose a first-order correction for these errors. RF heating in saline gels and in ex vivo tissues was performed with MR-compatible bipolar and monopolar electrodes inside a 1.5 T clinical MR scanner. Ultrasonography simultaneous with PRFS MRT was achieved using an MR-compatible phased-array ultrasonic transducer. PRFS MRT was performed in an interleaved fashion in three orthogonal planes and compared to measurements from fluoroptic sensors, under low and high RFA power levels, respectively. Control experiments were performed to isolate the main source of errors in standard PRFS thermometry. Ultrasonography, MRI and digital camera pictures clearly demonstrated the generation of bubbles whenever the radio frequency equipment was operated at therapeutic powers (≥ 30 W). Simultaneous bimodal (ultrasonography and MRI) monitoring of high-power RF heating demonstrated a correlation between the onset of the PRFS-thermometry errors and the appearance of bubbles around the applicator.
In an ex vivo study using a bipolar RF electrode under low power level (5 W), the MR measured temperature curves accurately matched the reference fluoroptic data. In similar ex vivo studies when applying higher RFA power levels (30 W), the correlation plots of MR thermometry versus fluoroptic data showed large errors in PRFS-derived temperature (up to 45 degrees C absolute deviation, positive or negative) depending not only on fluoroptic tip position but also on the RF electrode orientation relative to the B0 axis. Regions with apparent decrease in the PRFS-derived temperature maps as much as 30 degrees C below the initial baseline were visualized during RFA high power application. Ex vivo data were corrected assuming a Gaussian dynamic source of susceptibility, centered in the anode/cathode gap of the RF bipolar electrode. After correction, the temperature maps recovered the revolution symmetry pattern predicted by theory and matched the fluoroptic data within 4.5 degrees C mean offset. RFA induces dynamic changes in magnetic bulk susceptibility in biological tissue, resulting in large and spatially dependent errors of phase-subtraction-only PRFS MRT and unexploitable thermal dose maps. These thermometry artifacts were strongly correlated with the appearance of transient cavitation. A first-order dynamic model of susceptibility provided a useful method for minimizing these artifacts in phantom and ex vivo experiments.
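The phase-to-temperature conversion that these susceptibility errors corrupt is a one-line formula. The sketch below uses typical textbook values (PRF coefficient about -0.01 ppm/°C, proton gyromagnetic ratio 42.576 MHz/T); the parameter values and sign convention are assumptions, not the paper's calibration:

```python
import numpy as np

def prfs_delta_T(dphi, B0=1.5, TE=0.01, alpha=-0.01e-6, gamma=42.576e6):
    """Temperature change (deg C) from a GRE phase difference dphi (rad)
    via the PRFS method: dT = dphi / (2*pi*alpha*gamma*B0*TE).
    alpha: PRF temperature coefficient (~ -0.01 ppm/degC, assumed);
    gamma: proton gyromagnetic ratio in Hz/T; sign conventions vary
    between scanners."""
    return dphi / (2.0 * np.pi * alpha * gamma * B0 * TE)
```

Because any susceptibility-induced phase shift enters this formula identically to a thermal shift, bubble-induced phase errors map directly into the large apparent temperature errors reported above.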
Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook
2017-06-20
The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure errors are controlled by a guided removal process, such as computer-controlled optical surfacing, while smaller-scale surface errors are controlled by the polishing process parameters. Surface errors with periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, microsurface roughness is specified as the root mean square over a high spatial frequency range, evaluated on a 0.5×0.5 mm local surface map with 500×500 pixels. This specification is not adequate to fully describe the characteristics required for advanced optical systems. The process for controlling and minimizing mid- to high-spatial-frequency surface errors, with periods of up to ∼2-3 mm, was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs with the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.
Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur
2016-07-25
In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.
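In the uncorrelated gamma-gamma model underlying one of the three scenarios, the normalized irradiance is the product of two unit-mean gamma variates representing large- and small-scale turbulence. A minimal Monte Carlo sketch under that assumption (the correlated cases in the paper require a joint model and are not reproduced here):

```python
import numpy as np

def gamma_gamma_samples(alpha, beta, n, rng):
    """Draw n samples of normalized irradiance I = X*Y from the
    gamma-gamma turbulence model (uncorrelated case):
    X ~ Gamma(alpha, scale=1/alpha), Y ~ Gamma(beta, scale=1/beta),
    so E[I] = 1 and the scintillation index is
    1/alpha + 1/beta + 1/(alpha*beta)."""
    x = rng.gamma(alpha, 1.0 / alpha, n)
    y = rng.gamma(beta, 1.0 / beta, n)
    return x * y
```

Such samples can drive a Monte Carlo ABER estimate by averaging the conditional bit error rate over the irradiance distribution.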
Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin
2016-01-01
Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the constructed control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. Besides, a protection program is added in the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way for on-line vibration suppression, and the machining quality can be obviously improved. PMID:26751448
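The frequency-domain LMS idea of adapting one complex weight per bin can be sketched as a per-bin normalized LMS identifier. This is a generic sketch of the frequency-domain adaptation only; the paper's hybrid error criterion and its single-FFT, no-IFFT structure are not reproduced:

```python
import numpy as np

def freq_domain_lms(x, d, N=32, mu=0.5):
    """Per-bin normalized frequency-domain LMS: adapts a complex weight
    vector W so that W * FFT(x_block) tracks FFT(d_block), block by
    block. Each bin is an independent scalar LMS problem."""
    W = np.zeros(N, dtype=complex)
    for k in range(len(x) // N):
        X = np.fft.fft(x[k * N:(k + 1) * N])
        D = np.fft.fft(d[k * N:(k + 1) * N])
        E = D - W * X                                        # frequency-domain error
        W += mu * np.conj(X) * E / (np.abs(X)**2 + 1e-12)    # normalized update
    return W
```

In an active control setting, W would shape the actuator drive rather than identify a plant, but the update law has the same form.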
Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals
NASA Technical Reports Server (NTRS)
Lockard, David P.; Casper, Jay H.
2005-01-01
The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.
NASA Astrophysics Data System (ADS)
Yoshida, Kenichiro; Nishidate, Izumi; Ojima, Nobutoshi; Iwata, Kayoko
2014-01-01
To quantitatively evaluate skin chromophores over a wide region of curved skin surface, we propose an approach that suppresses the effect of the shading-derived error in the reflectance on the estimation of chromophore concentrations, without sacrificing the accuracy of that estimation. In our method, we use multiple regression analysis, assuming the absorbance spectrum as the response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as the predictor variables. The concentrations of melanin and total hemoglobin are determined from the multiple regression coefficients using compensation formulae (CF) based on the diffuse reflectance spectra derived from a Monte Carlo simulation. To suppress the shading-derived error, we investigated three different combinations of multiple regression coefficients for the CF. In vivo measurements with the forearm skin demonstrated that the proposed approach can reduce the estimation errors that are due to shading-derived errors in the reflectance. With the best combination of multiple regression coefficients, we estimated that the ratio of the error to the chromophore concentrations is about 10%. The proposed method does not require any measurements or assumptions about the shape of the subjects; this is an advantage over other studies related to the reduction of shading-derived errors.
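The multiple regression step described above is an ordinary linear least-squares fit of the absorbance spectrum to the chromophore extinction spectra. A generic sketch (with a constant term that absorbs wavelength-flat offsets such as shading); the Monte Carlo-derived compensation formulae (CF) of the paper are not included:

```python
import numpy as np

def fit_chromophores(absorbance, eps_mel, eps_hbo, eps_hb):
    """Multiple regression of an absorbance spectrum on the extinction
    spectra of melanin, oxygenated and deoxygenated hemoglobin, plus an
    intercept. Returns the four regression coefficients; converting
    them to concentrations requires the paper's compensation formulae."""
    A = np.column_stack([eps_mel, eps_hbo, eps_hb,
                         np.ones(len(absorbance))])
    coefs, *_ = np.linalg.lstsq(A, absorbance, rcond=None)
    return coefs
```

The three coefficient combinations investigated in the paper correspond to different ways of mapping these regression coefficients to melanin and total hemoglobin concentrations.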
Sigmoid function based integral-derivative observer and application to autopilot design
NASA Astrophysics Data System (ADS)
Shao, Xingling; Wang, Honglun; Liu, Jun; Tang, Jun; Li, Jie; Zhang, Xiaoming; Shen, Chong
2017-02-01
To handle the problems of accurate signal reconstruction and of controller implementation with integral and derivative components in the presence of noisy measurements, and motivated by the design principles of the sigmoid-function-based tracking differentiator and the nonlinear continuous integral-derivative observer, a novel sigmoid-function-based integral-derivative observer (SIDO) is developed. The key merit of the proposed SIDO is that it can simultaneously provide continuous integral and differential estimates with almost no drift or chattering, as well as acceptable noise tolerance from the output measurement; its stability is established based on exponential stability and singular perturbation theory. In addition, the effectiveness of SIDO in suppressing drift phenomena and high-frequency noise is first revealed using describing functions and confirmed through simulation comparisons. Finally, the theoretical results on SIDO are demonstrated with application to autopilot design: 1) the integral and tracking estimates are extracted from the sensed pitch angular rate contaminated by nonwhite noise in the feedback loop, and 2) the PID (proportional-integral-derivative) attitude controller is realized by adopting the error estimates offered by SIDO instead of the ideal integral and derivative operators, achieving satisfactory tracking performance under control constraints.
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
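The generalised cross-correlation pipeline the paper builds on can be sketched briefly. The example below uses the classical PHAT weighting as a stand-in for the paper's modified ML prefilter (which additionally requires PSD/CSD estimates and the regularisation factor); function name and conventions are illustrative:

```python
import numpy as np

def gcc_delay(x, y, fs, weight="phat"):
    """Generalized cross-correlation time-delay estimate between two
    sensor signals; a positive result means x lags y. PHAT weighting
    is used here as a simple stand-in for an ML-style prefilter."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    csd = X * np.conj(Y)
    if weight == "phat":
        csd /= np.abs(csd) + 1e-12     # whitening (PHAT) prefilter
    cc = np.fft.irfft(csd, n)
    # rearrange to lags -(len(y)-1) .. len(x)-1
    cc = np.concatenate((cc[-(len(y) - 1):], cc[:len(x)]))
    lag = np.argmax(np.abs(cc)) - (len(y) - 1)
    return lag / fs
```

Given the time-difference estimate, the leak position follows from the sensor spacing and the propagation speeds of the connected pipe sections, as in the paper's location formula.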
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify methods to improve it. We present an explicit time-dependent smoothing evaluation model containing specific smoothing parameters derived directly from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we propose a strategy to improve the RC-lap smoothing efficiency that incorporates the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
Atwood, E.L.
1958-01-01
Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.
NASA Technical Reports Server (NTRS)
Delp, P.; Crossman, E. R. F. W.; Szostak, H.
1972-01-01
The automobile-driver describing function for lateral position control was estimated for three subjects from frequency response analysis of straight road test results. The measurement procedure employed an instrumented full size sedan with known steering response characteristics, and equipped with a lateral lane position measuring device based on video detection of white stripe lane markings. Forcing functions were inserted through a servo driven double steering wheel coupling the driver to the steering system proper. Random appearing, Gaussian, and transient time functions were used. The quasi-linear models fitted to the random appearing input frequency response characterized the driver as compensating for lateral position error in a proportional, derivative, and integral manner. Similar parameters were fitted to the Gabor transformed frequency response of the driver to transient functions. A fourth term corresponding to response to lateral acceleration was determined by matching the time response histories of the model to the experimental results. The time histories show evidence of pulse-like nonlinear behavior during extended response to step transients which appear as high frequency remnant power.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM- FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein
2014-11-15
In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
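The textbook single-harmonic slab relation that the paper's two-harmonic estimator refines can be sketched directly: for pure diffusion in a semi-infinite slab, the heat-wave phase grows linearly with radius, phi(r) = r*sqrt(omega/(2*chi)), so chi follows from the phase slope. This is the basic case only, without the V and τ corrections derived in the paper:

```python
import numpy as np

def chi_from_phase(omega, r, phase):
    """Estimate the diffusivity chi from heat-wave phase measurements.

    Semi-infinite slab, pure diffusion: phi(r) = r * sqrt(omega/(2*chi)),
    hence chi = omega / (2 * slope**2) with slope = d(phi)/dr.
    """
    slope = np.polyfit(r, phase, 1)[0]
    return omega / (2.0 * slope ** 2)
```

A synthetic round trip (build phases from a known chi, recover it) confirms the algebra.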
Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leake, James E.; Linton, Mark G.; Schuck, Peter W., E-mail: james.e.leake@nasa.gov
Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the development of coronal models which are “data-driven” at the photosphere. We present an investigation to determine the feasibility and accuracy of such methods. Our validation framework uses a simulation of active region (AR) formation, modeling the emergence of magnetic flux from the convection zone to the corona, as a ground-truth data set to supply the photospheric information and to perform the validation of the data-driven method. We focus our investigation on how the accuracy of the data-driven model depends on the temporal frequency of the driving data. The Helioseismic and Magnetic Imager on NASA’s Solar Dynamics Observatory produces full-disk vector magnetic field measurements at a 12-minute cadence. Using our framework we show that ARs that emerge over 25 hr can be modeled by the data-driving method with only ∼1% error in the free magnetic energy, assuming the photospheric information is specified every 12 minutes. However, for rapidly evolving features, under-sampling of the dynamics at this cadence leads to a strobe effect, generating large electric currents and incorrect coronal morphology and energies. We derive a sampling condition for the driving cadence based on the evolution of these small-scale features, and show that higher-cadence driving can lead to acceptable errors. Future work will investigate the source of errors associated with deriving plasma variables from the photospheric magnetograms as well as other sources of errors, such as reduced resolution, instrument bias, and noise.
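The strobe effect described here is ordinary aliasing: a feature evolving at frequency f, sampled every `cadence` time units, appears at the nearest alias of f. A one-line helper makes the cadence condition concrete (illustrative numbers, not the paper's):

```python
def apparent_frequency(f, cadence):
    """Aliased (strobe) frequency seen when a process evolving at
    frequency f is sampled every `cadence` time units (same units)."""
    fs = 1.0 / cadence                      # sampling frequency
    return abs(f - round(f / fs) * fs)      # fold onto nearest alias
```

For example, a 10-minute-period feature driven at a 12-minute cadence folds down to a spurious 1-hour-period signal, whereas a 1-minute cadence resolves it directly.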
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to determine the direction of the outgoing beam with high accuracy. Previous works have analyzed error sources and their impact on the performance of the Risley-prism system, but with limited numerical approximation accuracy, and pointing-error analyses have provided results only for the case in which the component errors, prism orientation errors, and assembly errors are fixed. In this work, a prototype Risley-prism system was designed. First-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicate that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were shown to be the sum of the errors caused by the first and second prisms separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be used to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration.
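The exact-versus-first-order comparison is easy to illustrate for a single thin prism at normal incidence (the full system has two rotating prisms and more error terms; this is only the simplest member of the family): Snell's law gives the exact deviation, while the familiar first-order result is (n - 1)·alpha.

```python
import math

def exact_deviation(n, alpha):
    """Exact deviation of a ray at normal incidence on the first face
    of a thin prism with apex angle alpha and index n (Snell's law at
    the exit face)."""
    return math.asin(n * math.sin(alpha)) - alpha

def first_order_deviation(n, alpha):
    """First-order (small-angle) approximation of the same deviation."""
    return (n - 1.0) * alpha
```

For n = 1.5 and a 2 degree wedge the two agree to about 1e-5 rad, with the exact value slightly larger, which is the kind of residual a first-order error analysis has to live with.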
Impact of the Combination of GNSS and Altimetry Data on the Derived Global Ionosphere Maps
NASA Astrophysics Data System (ADS)
Todorova, S.; Schuh, H.; Hobiger, T.; Hernandez-Pajares, M.
2007-05-01
The classical input data for the development of Global Ionosphere Maps (GIM) of the Total Electron Content (TEC) are the so-called "geometry-free linear combination" observables obtained from dual-frequency Global Navigation Satellite System (GNSS) observations. Such maps in general achieve a good quality of ionosphere representation. However, the GNSS stations are inhomogeneously distributed, with large gaps particularly over the sea surface, which lowers the precision of the GIM over these areas. On the other hand, dual-frequency satellite altimetry missions such as Jason-1 and TOPEX/Poseidon provide ionospheric information precisely above the sea surface, where the altimetry observations are performed. Due to the limited spread of the measurements and some open issues related to systematic errors, the ionospheric data from satellite altimetry have so far been used only for cross-validation of the GNSS GIM. It can be anticipated, however, that some specifics of the ionosphere parameters derived by satellite altimetry will partly balance the inhomogeneity of the GNSS data. Important features of this kind are the complementary global coverage, the different biases, and the absence of an additional mapping function, as is needed for GNSS. In this study we create two-hourly GIM from GNSS data and additionally introduce satellite altimetry observations, which help to compensate for the insufficient GNSS coverage of the oceans. The combination of the data from around 180 GNSS stations and the satellite altimetry mission Jason-1 is performed on the normal equation level. The comparison between the integrated ionosphere models and the GNSS-only maps shows a higher accuracy of the combined GIM over the seas. A further effect of the combination is that the method allows the independent estimation of daily values of the Differential Code Biases (DCB) for all GNSS satellites and receivers, and of the systematic errors affecting the altimetry measurements.
Such errors should include a hardware delay similar to the GNSS DCB as well as the impact of the topside ionosphere, which is not sampled by Jason-1. At this stage, for testing purposes we estimate a constant daily value, which will be further investigated. The final aim of the study is the development of improved combined global TEC maps, which make best use of the advantages of each particular type of data and have higher accuracy and reliability than the results derived by the two methods if treated individually.
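Combination "on the normal equation level" means each data set contributes its normal matrix and right-hand side rather than its raw observations; stacking the observations and adding the normals give identical least-squares estimates. A minimal sketch (toy matrices, not ionospheric parameterization):

```python
import numpy as np

def combine_normals(A1, y1, A2, y2):
    """Least-squares combination of two observation groups (e.g. GNSS
    and altimetry) at the normal-equation level:
        N = A1'A1 + A2'A2,  b = A1'y1 + A2'y2,  solve N x = b.
    """
    N = A1.T @ A1 + A2.T @ A2
    b = A1.T @ y1 + A2.T @ y2
    return np.linalg.solve(N, b)
```

Equivalence with a direct fit to the stacked system is the basic check on the implementation.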
Electrical and magnetic properties of rock and soil
Scott, J.H.
1983-01-01
Field and laboratory measurements have been made to determine the electrical conductivity, dielectric constant, and magnetic permeability of rock and soil in areas of interest in studies of electromagnetic pulse propagation. Conductivity is determined by making field measurements of apparent resistivity at very low frequencies (0-20 cps), and interpreting the true resistivity of layers at various depths by curve-matching methods. Interpreted resistivity values are converted to corresponding conductivity values which are assumed to be applicable at 10^2 cps, an assumption which is considered valid because the conductivity of rock and soil is nearly constant at frequencies below 10^2 cps. Conductivity is estimated at higher frequencies (up to 10^6 cps) by using statistical correlations of three parameters obtained from laboratory measurements of rock and soil samples: conductivity at 10^2 cps, frequency, and conductivity measured over the range 10^2 to 10^6 cps. Conductivity may also be estimated in this frequency range by using field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and conductivity measured over the range 10^2 to 10^6 cps. This method is less accurate because nonrandom variation of ion concentration in natural pore water introduces error. Dielectric constant is estimated in a similar manner from field-derived conductivity values applicable at 10^2 cps and statistical correlations of three parameters obtained from laboratory measurements of samples: conductivity measured at 10^2 cps, frequency, and dielectric constant measured over the frequency range 10^2 to 10^6 cps.
Dielectric constant may also be estimated from field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and dielectric constant measured from 10^2 to 10^6 cps, but again, this method is less accurate because of variation of ion concentration of pore water. Special laboratory procedures are used to measure the conductivity and dielectric constant of rock and soil samples. Electrode polarization errors are minimized by using an electrode system that is electrochemically reversible with ions in pore water.
A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.
Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W
2012-09-01
In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.
An optical ASK and FSK phase diversity transmission system
NASA Astrophysics Data System (ADS)
Vandenboom, H.; Vanetten, W.; Dekrom, W. H. C.; Vanbennekom, P.; Huijskens, F.; Niessen, L.; Deleijer, F.
1992-12-01
The results of a contribution to an electrooptical project for a 'phase diversity system', covering ASK and FSK (Amplitude and Frequency Shift Keying), are described. Specifications of subsystems, and the tolerances and consequences of these tolerances for the final system performance, were derived. For the optical network of the phase diversity receiver, a manufacturing setup for three-by-three fused biconical taper fiber couplers was developed. In order to characterize planar optical networks, a setup was constructed to measure the phase relations at 1523 nm. The optical frequency of the local oscillator laser has to be locked on to the frequency of the received optical signal; this locking circuit is described. A complete optical three-by-three phase diversity transmission system was developed that can be used as a testbed for subsystems. The sensitivity of the receiver at a bit error rate of 10^-9 is -47.2 dBm, which is 4.2 dB better than the specified value.
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie
2004-01-01
This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections as provided by NASA's global differential GPS system. While the dual-frequency user has the ability to eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error we take advantage of the fact that the magnitude of the group delay in range observable and the carrier phase advance have the same magnitude but are opposite in sign. A way to calibrate this error is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier phase based point positioning method on a global extent.
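The cancellation the abstract relies on, equal-magnitude group delay (+I on the code range) and phase advance (-I on the carrier phase), is exactly what the half-sum of code and phase exploits; this combination is often called GRAPHIC. A minimal sketch with synthetic numbers (the carrier-phase bias B, from the ambiguity, survives the average):

```python
def graphic(pseudorange_m, carrier_phase_m):
    """Ionosphere-free single-frequency combination: the ionospheric
    group delay (+I on the code observable) and the phase advance
    (-I on the carrier phase) cancel in the average, leaving the
    geometric range plus half the phase bias."""
    return 0.5 * (pseudorange_m + carrier_phase_m)
```

With range rho, ionosphere delay I and phase bias B, code = rho + I and phase = rho - I + B, so the combination returns rho + B/2 with I removed, independent of how large I is.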
NASA Astrophysics Data System (ADS)
Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian
2018-03-01
The performance of a decode-and-forward dual-hop mixed radio-frequency/free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, average bit error rate (ABER) results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the ABER of the RF link is derived with the help of the hypergeometric function, and that of the FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained on the basis of the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed for different Nakagami-m parameters, turbulence strengths, receiver aperture sizes, and boresight displacements. The results show that, with ZBPE and NBPE considered, the FSO link suffers severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in an urban area. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.
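The Monte Carlo side of such a validation is straightforward for the RF hop alone: draw Nakagami-m fading amplitudes (the square root of Gamma(m, 1/m) power, unit mean), pass BPSK symbols through the faded AWGN channel, and count errors. This is only the RF-link piece, with illustrative parameters, not the end-to-end mixed-link analysis:

```python
import numpy as np

def bpsk_ber_nakagami(snr_db, m, n_bits=200000, seed=1):
    """Monte Carlo ABER of BPSK over a unit-average-power Nakagami-m
    fading channel at per-bit SNR snr_db."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    h = np.sqrt(rng.gamma(m, 1.0 / m, n_bits))   # Nakagami-m amplitude
    bits = rng.integers(0, 2, n_bits)
    s = 1 - 2 * bits                             # BPSK: 0 -> +1, 1 -> -1
    r = h * s + rng.normal(0.0, np.sqrt(0.5 / snr), n_bits)
    return np.mean((r < 0) != (s < 0))
```

For m = 1 the channel reduces to Rayleigh fading, whose closed-form BER 0.5*(1 - sqrt(g/(1+g))) gives a direct check; larger m (milder fading) must give a lower error rate.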
NASA Astrophysics Data System (ADS)
Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating
2018-06-01
The non-uniform interval resampling method has been widely used in frequency-modulated continuous-wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak is deteriorated by the fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectral lines of a molecular frequency reference. By using an adjacent-point replacement and spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution-enhanced, high-precision measurement. Moreover, using the proposed method, we achieved an absolute-distance precision of better than 45 μm within 8 m.
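The interpolation step can be sketched generically: given samples whose true optical-frequency positions are known (e.g. from a reference interferometer or spectral lines), re-interpolate them onto a uniform grid so the subsequent FFT sees an unchirped tone. The sketch below uses linear `np.interp` as a stand-in for the paper's spline:

```python
import numpy as np

def resample_uniform(values, actual_positions, n_out):
    """Re-interpolate samples taken at non-uniform (but known,
    increasing) positions onto a uniform grid of n_out points."""
    uniform = np.linspace(actual_positions[0], actual_positions[-1], n_out)
    return np.interp(uniform, actual_positions, values), uniform
```

A jittered sampling of a sine recovers the ideal uniformly sampled signal to well below the jitter-induced error of naive processing.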
Higher-order differential phase shift keyed modulation
NASA Astrophysics Data System (ADS)
Vanalphen, Deborah K.; Lindsey, William C.
1994-02-01
Advanced modulation/demodulation techniques which are robust in the presence of phase and frequency uncertainties continue to be of interest to communication engineers. We are particularly interested in techniques which accommodate slow channel phase and frequency variations with minimal performance degradation and which alleviate the need for phase and frequency tracking loops in the receiver. We investigate the performance sensitivity to frequency offsets of a modulation technique known as binary Double Differential Phase Shift Keying (DDPSK) and compare it to that of classical binary Differential Phase Shift Keying (DPSK). We also generalize our analytical results to include nth-order, M-ary DPSK. The DDPSK (n = 2) technique was first introduced in the Russian literature circa 1972 and was studied more thoroughly in the late 1970's by Pent and Okunev. Here, we present an expression for the symbol error probability that is easy to derive and to evaluate numerically. We also present graphical results that establish when, as a function of signal energy-to-noise ratio and normalized frequency offset, binary DDPSK is preferable to binary DPSK with respect to performance in additive white Gaussian noise. Finally, we provide insight into the optimum receiver from a detection theory viewpoint.
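Why double differencing removes a frequency offset can be shown in a few lines: the data ride on the second difference of the carrier phase, so a constant phase offset cancels in the first difference and a constant frequency offset (a linear phase ramp) cancels in the second. This noiseless baseband sketch is illustrative only, with two reference symbols prepended:

```python
import numpy as np

def ddpsk_mod(bits):
    """Binary DDPSK: bits drive the *second* difference of the phase."""
    theta, delta = 0.0, 0.0
    out = [np.exp(0j), np.exp(0j)]       # two reference symbols
    for b in bits:
        delta += np.pi * b               # first difference carries the bit
        theta += delta                   # accumulate into absolute phase
        out.append(np.exp(1j * theta))
    return np.array(out)

def ddpsk_demod(r):
    z = r[1:] * np.conj(r[:-1])   # 1st difference: removes phase offset
    w = z[1:] * np.conj(z[:-1])   # 2nd difference: removes frequency offset
    return (w.real < 0).astype(int)
```

Applying an arbitrary phase offset and a 5%-of-symbol-rate frequency offset to the transmitted symbols leaves the decoded bits untouched, which is exactly the robustness (and, in noise, the extra penalty) the paper quantifies.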
Month-to-month and year-to-year reproducibility of high frequency QRS ECG signals
NASA Technical Reports Server (NTRS)
Batdorf, Niles J.; Feiveson, Alan H.; Schlegel, Todd T.
2004-01-01
High frequency electrocardiography analyzing the entire QRS complex in the frequency range of 150 to 250 Hz may prove useful in the detection of coronary artery disease, yet the long-term stability of these waveforms has not been fully characterized. Therefore, we prospectively investigated the reproducibility of the root mean squared voltage, kurtosis, and the presence versus absence of reduced amplitude zones in signal averaged 12-lead high frequency QRS recordings acquired in the supine position one month apart in 16 subjects and one year apart in 27 subjects. Reproducibility of root mean squared voltage and kurtosis was excellent over these time intervals in the limb leads, and acceptable in the precordial leads using both the V-lead and CR-lead derivations. The relative error of root mean squared voltage was 12% month-to-month and 16% year-to-year in the serial recordings when averaged over all 12 leads. Reduced amplitude zones were also reproducible up to a rate of 87% and 81%, respectively, for the month-to-month and year-to-year recordings. We conclude that 12-lead high frequency QRS electrocardiograms are sufficiently reproducible for clinical use.
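The core quantity, RMS voltage of the 150-250 Hz content of the signal-averaged QRS, can be computed with a simple FFT brick-wall band-pass. The brick-wall filter is a stand-in for whatever filter the study actually used, which the abstract does not specify:

```python
import numpy as np

def hf_qrs_rms(signal, fs, band=(150.0, 250.0)):
    """RMS voltage of the band-limited (default 150-250 Hz) content of
    a signal-averaged QRS complex, via an FFT brick-wall band-pass."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0   # zero out-of-band bins
    hf = np.fft.irfft(spec, n)
    return np.sqrt(np.mean(hf ** 2))
```

A synthetic check: a unit 200 Hz tone plus a large 50 Hz component returns the in-band RMS 1/sqrt(2), with the out-of-band energy rejected.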
Optimization of the R-SQUID noise thermometer
NASA Astrophysics Data System (ADS)
Seppä, Heikki
1986-02-01
The Josephson junction can be used to convert voltage into frequency and thus it can be used to convert voltage fluctuations generated by Johnson noise in a resistor into frequency fluctuations. As a consequence, the temperature of the resistor can be defined by measuring the variance of the frequency fluctuations. Unfortunately, the absolute determination of temperature by this approach is disturbed by several undesirable effects: a rolloff introduced by the bandwidth of the postdetection filter, additional noise caused by rf amplifiers, and a mixed noise effect caused by the nonlinearity of the Josephson junction together with rf noise in the tank circuit. Furthermore, the variance is a statistical quantity and therefore the limited number of frequency counts produces inaccuracy in a temperature measurement. In this work the total inaccuracy of the noise thermometer is analyzed and the optimal choice of the parameters is derived. A practical way to find the optimal conditions for the Josephson junction noise thermometer is discussed. The inspection shows that under the optimal conditions the total error is dependent only on the temperature under determination, the equivalent noise temperature of the preamplifier, the bias frequency of the SQUID, and the total time used for the measurement.
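The voltage-to-frequency conversion and the temperature readout can be sketched under simple assumptions: f = V/Phi0 for the Josephson junction, a one-sided Johnson-noise PSD S_V = 4*kB*T*R, hence a white frequency-fluctuation PSD S_f = 4*kB*T*R/Phi0^2, and variance S_f/(2*tau) for a frequency average over gate time tau. These relations are the idealized textbook model, not the full error budget analyzed in the paper:

```python
PHI0 = 2.067833848e-15   # magnetic flux quantum, Wb
KB = 1.380649e-23        # Boltzmann constant, J/K

def freq_variance(T, R, tau):
    """Idealized variance (Hz^2) of the Josephson frequency averaged
    over gate time tau: var = S_f/(2*tau), S_f = 4*KB*T*R/PHI0**2."""
    return 4.0 * KB * T * R / (PHI0 ** 2 * 2.0 * tau)

def temperature_from_variance(var, R, tau):
    """Invert the relation above: read T off the measured variance."""
    return var * PHI0 ** 2 * 2.0 * tau / (4.0 * KB * R)
```

The round trip (temperature to variance and back) is exact by construction; the paper's contribution is precisely the corrections this idealization omits (filter rolloff, amplifier noise, mixed noise, finite counts).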
NASA Technical Reports Server (NTRS)
Brentner, K. S.
1986-01-01
A computer program has been developed at the Langley Research Center to predict the discrete frequency noise of conventional and advanced helicopter rotors. The program, called WOPWOP, uses the most advanced subsonic formulation of Farassat that is less sensitive to errors and is valid for nearly all helicopter rotor geometries and flight conditions. A brief derivation of the acoustic formulation is presented along with a discussion of the numerical implementation of the formulation. The computer program uses realistic helicopter blade motion and aerodynamic loadings, input by the user, for noise calculation in the time domain. A detailed definition of all the input variables, default values, and output data is included. A comparison with experimental data shows good agreement between prediction and experiment; however, accurate aerodynamic loading is needed.
Liu, Wei; Yao, Kainan; Huang, Danian; Lin, Xudong; Wang, Liang; Lv, Yaowen
2016-06-13
The Greenwood frequency (GF) is influential in performance improvement for the coherent free-space optical communications (CFSOC) system with a closed-loop adaptive optics (AO) unit. We analyze the impact of tilt and high-order aberrations on the mixing efficiency (ME) and bit error rate (BER) under different GF. The relationship between the root-mean-square (RMS) value of the ME, the RMS of the tilt aberrations, and the GF is derived to estimate the volatility of the ME. Furthermore, a numerical simulation is applied to verify the theoretical analysis, and an experimental correction system is designed with a double-stage fast-steering mirror and a 97-element continuous-surface deformable mirror. The conclusions of this paper provide a reference for designing the AO system for the CFSOC system.
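The Greenwood frequency itself has the standard form f_G = 2.31 * lambda^(-6/5) * [integral of Cn^2(z) * v(z)^(5/3) dz]^(3/5), which is easy to evaluate on a discretized turbulence profile. The profile values below are illustrative, not measurements from the paper:

```python
import numpy as np

def greenwood_frequency(wavelength, cn2, wind, dz):
    """Greenwood frequency (Hz) from a layered turbulence profile:
    cn2 [m^(-2/3)] and wind speed [m/s] per layer of thickness dz [m],
    wavelength in metres."""
    integral = np.sum(cn2 * wind ** (5.0 / 3.0)) * dz
    return 2.31 * wavelength ** (-6.0 / 5.0) * integral ** (3.0 / 5.0)
```

A useful sanity check falls out of the exponents: (v^(5/3))^(3/5) = v, so doubling the wind speed everywhere exactly doubles f_G.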
Petrovic, Ljubomir M; Zorica, Dusan M; Stojanac, Igor Lj; Krstonosic, Veljko S; Hadnadjev, Miroslav S; Janev, Marko B; Premovic, Milica T; Atanackovic, Teodor M
2015-08-01
In this study we analyze the viscoelastic properties, prior to setting, of three flowable composites (Wave, Wave MV, Wave HV) and one universal hybrid resin composite (Ice). We developed a mathematical model containing fractional derivatives in order to describe their properties. An isothermal experimental study was conducted on a rheometer with parallel plates. In a dynamic oscillatory shear test, the storage and loss moduli, as well as the complex viscosity, were determined. We assumed four different fractional viscoelastic models, each belonging to one particular class derivable from the distributed-order fractional constitutive equation. The restrictions following from the Second law of thermodynamics are imposed on each model. The optimal parameters corresponding to each model are obtained by minimizing an error function that takes into account the storage and loss moduli, thus obtaining the best fit to the experimental data. In the frequency range considered, we found that for Wave HV and Wave MV there exists a critical frequency at which the loss and storage modulus curves intersect, defining a boundary between two types of behavior: one in which the storage modulus is larger than the loss modulus, and one in which the situation is reversed. The loss and storage modulus curves for Ice and Wave do not show this behavior, having either elastic or viscous effects dominating over the entire frequency range considered. The developed models may be used to predict the behavior of the four tested composites under different flow conditions (different deformation speeds), thus helping to estimate optimal handling characteristics for specific clinical applications. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
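The storage/loss crossover described here can be illustrated with the simplest fractional model of this kind, a fractional Kelvin-Voigt element G*(i*w) = G + eta*(i*w)^alpha, which is only one representative of the model classes fitted in the paper, with made-up parameters:

```python
import numpy as np

def fkv_moduli(omega, G, eta, alpha):
    """Storage and loss moduli of a fractional Kelvin-Voigt model
    G*(i w) = G + eta*(i w)^alpha, using (i w)^a = w^a * exp(i a pi/2)."""
    c = np.cos(alpha * np.pi / 2.0)
    s = np.sin(alpha * np.pi / 2.0)
    storage = G + eta * omega ** alpha * c
    loss = eta * omega ** alpha * s
    return storage, loss
```

For alpha > 0.5 the loss modulus eventually overtakes the storage modulus; setting the two equal gives the analytic crossover frequency omega_c = (G/(eta*(sin - cos)(alpha*pi/2)))^(1/alpha), which the moduli reproduce.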
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set up as a large, weakly non-linear inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial wavelengths of a few hundred metres and temporal wavelengths of less than a day.
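The AR1 prior used for the baselines is a standard construction: x_t = phi*x_{t-1} + eps_t, which allows baselines that wander without being forced to be smooth. A sketch of the process (generic, with arbitrary parameters, not the ones fitted at CLF):

```python
import numpy as np

def ar1_series(n, phi, sigma_eps, x0=0.0, seed=0):
    """Generate an AR(1) process x_t = phi*x_{t-1} + eps_t,
    eps_t ~ N(0, sigma_eps^2); |phi| < 1 makes it stationary with
    variance sigma_eps^2 / (1 - phi^2)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    eps = rng.normal(0.0, sigma_eps, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x
```

The closed-form stationary variance provides a quick check on the generator, and is also what a prior of this kind effectively fixes for the baseline scatter.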
NASA Astrophysics Data System (ADS)
Dolman, A. M.; Laepple, T.; Kunz, T.
2017-12-01
Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" (bitbucket.org/ecus/sedproxy) numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time slices. To address this, we derive closed-form expressions for the power spectral density of the various error sources. The power spectra describe both the magnitude and the autocorrelation structure of the error, allowing timescale-dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, together with some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time series of the last millennium, the Holocene, and the deglaciation.
While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal reconstructability, conditional on the climate and sampling conditions.
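The notion of timescale-dependent uncertainty can be illustrated with the textbook formula for the variance of the mean of n autocorrelated error terms. This is a generic AR1-error sketch, not the paper's closed-form spectral expressions:

```python
def error_sd_of_mean(sigma, n, rho=0.0):
    """SD of the mean of n error terms that share an AR1 autocorrelation rho:
    var = (sigma^2 / n) * (1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho^k).
    rho = 0 recovers the familiar sigma/sqrt(n) white-noise scaling;
    rho -> 1 gives no reduction at all from aggregation."""
    var = (sigma ** 2 / n) * (1 + 2 * sum((1 - k / n) * rho ** k
                                          for k in range(1, n)))
    return var ** 0.5
```

Autocorrelated error (for example from bioturbation smoothing) therefore shrinks more slowly than white noise when measurements are aggregated to coarser timescales.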
Dynamic Magnetostriction of CoFe2O4 and Its Role in Magnetoelectric Composites
NASA Astrophysics Data System (ADS)
Aubert, A.; Loyau, V.; Pascal, Y.; Mazaleyrat, F.; LoBue, M.
2018-04-01
Applications of magnetostrictive materials commonly involve the use of the dynamic deformation, i.e., the piezomagnetic effect. Usually, this effect is described by the strain derivative ∂λ/∂H, which is deduced from the quasistatic magnetostrictive curve. However, the strain derivative might not be accurate for describing dynamic deformation in semihard materials such as cobalt ferrite (CFO). To highlight this issue, dynamic magnetostriction measurements of cobalt ferrite are performed and compared with the strain derivative. The experiment shows that the measured piezomagnetic coefficients are much lower than the strain derivative. To point out the direct application of this effect, low-frequency magnetoelectric (ME) measurements are also conducted on CFO/Pb(Zr,Ti)O3 bilayers. The experimental data are compared with calculated magnetoelectric coefficients which include a measured dynamic coefficient and result in very low relative error (<5%), highlighting the relevance of using a piezomagnetic coefficient derived from dynamic magnetostriction instead of a strain derivative coefficient to model ME composites. The magnetoelectric effect is then measured for several amplitudes of the alternating field Hac, and a nonlinear response is revealed. Based on these results, a CFO/Pb(Zr,Ti)O3/CFO trilayer is made exhibiting a high magnetoelectric coefficient of 578 mV/A (approximately 460 mV/(cm·Oe)) in an ac field of 38.2 kA/m (about 48 mT) at low frequency, which is 3 times higher than the measured value at 0.8 kA/m (approximately 1 mT). We discuss the viability of using semihard materials like cobalt ferrite for dynamic magnetostrictive applications such as the magnetoelectric effect.
NASA Astrophysics Data System (ADS)
Wiese, D. N.; McCullough, C. M.
2017-12-01
Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.
2009-01-01
This paper establishes a relationship between the polishing process parameters and the generation of mid-spatial-frequency error. Design considerations for the polishing lap, together with optimization of the process parameters (speeds, stroke, etc.) to keep the residual mid-spatial-frequency error to a minimum, are also presented.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of two-selected-spatial-frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two-selected-spatial-frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold on absolute phase values to determine fringe order errors, which makes the method more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
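The role of the fringe order in two-frequency phase unwrapping can be sketched as follows. This is the generic textbook recipe (estimate the high-frequency fringe order from a lower-frequency wrapped phase and the frequency ratio), not the authors' constraint-based detection and correction scheme:

```python
import math

def fringe_order(phi_low, phi_high, ratio):
    """Fringe order m of the high-frequency wrapped phase, estimated from a
    low-frequency wrapped phase and the frequency ratio. A phase error larger
    than the method's bound makes this rounding pick the wrong integer."""
    return round((ratio * phi_low - phi_high) / (2 * math.pi))

def unwrap(phi_low, phi_high, ratio):
    """Absolute phase recovered as phi_high + 2*pi*m."""
    return phi_high + 2 * math.pi * fringe_order(phi_low, phi_high, ratio)
```

A wrong fringe order shifts the recovered phase by a full multiple of 2*pi, which is why fringe order errors dominate the absolute phase error budget.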
Text familiarity, word frequency, and sentential constraints in error detection.
Pilotti, Maura; Chodorow, Martin; Schauss, Frances
2009-12-01
The present study examines whether the frequency of an error-bearing word and its predictability, arising from sentential constraints and text familiarity, either independently or jointly, would impair error detection by making proofreading driven by top-down processes. Prior to a proofreading task, participants were asked to read, copy, memorize, or paraphrase sentences, half of which contained errors. These tasks represented a continuum of progressively more demanding and time-consuming activities, which were thought to lead to comparable increases in text familiarity and thus predictability. Proofreading times were unaffected by whether the sentences had been encountered earlier. Proofreading was slower and less accurate for high-frequency words and for highly constrained sentences. Prior memorization produced divergent effects on accuracy depending on sentential constraints. The latter finding suggested that a substantial level of predictability, such as that produced by memorizing highly constrained sentences, can increase the probability of overlooking errors.
Chen, Runlin; Wei, Yangyang; Shi, Zhaoyang; Yuan, Xiaoyang
2016-01-01
The identification accuracy of dynamic characteristics coefficients is difficult to guarantee because of the errors of the measurement system itself. A novel dynamic calibration method of the measurement system for dynamic characteristics coefficients is proposed in this paper to eliminate the errors of the measurement system itself. Compared with the calibration method of suspension quality, this novel calibration method differs in that the verification device is a spring-mass system, which can simulate the dynamic characteristics of a sliding bearing. The verification device is built, and the calibration experiment is implemented over a wide frequency range, in which the bearing stiffness is simulated by disc springs. The experimental results show that the amplitude errors of this measurement system are small in the frequency range of 10 Hz–100 Hz, and the phase errors increase as the frequency increases. A simulated experiment of dynamic characteristics coefficients identification in the frequency range of 10 Hz–30 Hz preliminarily verifies that the calibration data in this range can well support dynamic characteristics tests of sliding bearings. Bearing experiments over greater frequency ranges require higher manufacturing and installation precision of the calibration device. Besides, the processes of the calibration experiments should be improved. PMID:27483283
Precision Saturated Absorption Spectroscopy of H3+
NASA Astrophysics Data System (ADS)
Guan, Yu-chan; Liao, Yi-Chieh; Chang, Yung-Hsiang; Peng, Jin-Long; Shy, Jow-Tsong
2016-06-01
In our previous work on the Lamb dips of the ν2 fundamental band of H3+, the saturated absorption spectrum was obtained by third-derivative spectroscopy using frequency modulation [1]. However, the frequency modulation also causes error in the absolute frequency determination. To solve this problem, we have built an offset-locking system to lock the OPO pump frequency to an iodine-stabilized Nd:YAG laser. With this modification, we are able to scan the OPO idler frequency precisely and obtain the profile of the Lamb dips. Double modulation (amplitude modulation of the idler power and concentration modulation of the ion) is employed to suppress the interference fringes of the signal and effectively increase the signal-to-noise ratio. To determine the absolute frequency of the idler wave, the pump wave is offset locked on the R(56) 32-0 a10 hyperfine component of 127I2, and the signal wave is locked on a GPS-disciplined fiber optical frequency comb (OFC). All references and lock systems have absolute frequency accuracy better than 10 kHz. Here, we demonstrate its performance by measuring one transition of methane and sixteen transitions of H3+. This instrument could pave the way for high-resolution spectroscopy of a variety of molecular ions. [1] H.-C. Chen, C.-Y. Hsiao, J.-L. Peng, T. Amano, and J.-T. Shy, Phys. Rev. Lett. 109, 263002 (2012).
An experimental system for the study of active vibration control - Development and modeling
NASA Astrophysics Data System (ADS)
Batta, George R.; Chen, Anning
A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase of uncertainties at all frequencies. Missing a low-frequency mode causes the uncertainties to be much larger at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.
Real-Time Stability and Control Derivative Extraction From F-15 Flight Data
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.
2003-01-01
A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.
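The frequency-domain equation-error idea can be sketched for a first-order model: at selected frequency bins, the relation j*w*X(w) = a*X(w) + b*U(w) is fitted by least squares to recover the parameters a and b. The plant, signals, and bin choices below are illustrative toy values, not the F-15 algorithm or its recursive Fourier transform implementation:

```python
import cmath
import math

def simulate(a, b, u, dt):
    """Euler simulation of x' = a*x + b*u, used only to make test data."""
    x = [0.0]
    for n in range(len(u) - 1):
        x.append(x[-1] + dt * (a * x[-1] + b * u[n]))
    return x

def dft_bin(sig, k):
    """Single DFT bin (a stand-in for a recursive Fourier transform)."""
    n = len(sig)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(sig))

def estimate_ab(x, u, dt, bins):
    """Equation-error least squares: fit j*w*X = a*X + b*U over the given
    bins by solving the 2x2 real normal equations."""
    n = len(x)
    A11 = A12 = A22 = b1 = b2 = 0.0
    for k in bins:
        w = 2 * math.pi * k / (n * dt)
        X, U = dft_bin(x, k), dft_bin(u, k)
        y = 1j * w * X
        A11 += abs(X) ** 2
        A22 += abs(U) ** 2
        A12 += (X.conjugate() * U).real
        b1 += (X.conjugate() * y).real
        b2 += (U.conjugate() * y).real
    det = A11 * A22 - A12 * A12
    return (b1 * A22 - A12 * b2) / det, (A11 * b2 - A12 * b1) / det
```

With periodic excitation at the analysis bins, the estimates recover the true parameters up to a small discretization bias.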
Analytic second derivatives of the energy in the fragment molecular orbital method
NASA Astrophysics Data System (ADS)
Nakata, Hiroya; Nagata, Takeshi; Fedorov, Dmitri G.; Yokojima, Satoshi; Kitaura, Kazuo; Nakamura, Shinichiro
2013-04-01
We developed the analytic second derivatives of the energy for the fragment molecular orbital (FMO) method. First we derived the analytic expressions and then introduced some approximations related to the first- and second-order coupled perturbed Hartree-Fock equations. We developed a parallel program for the FMO Hessian with approximations in GAMESS and used it to calculate infrared (IR) spectra and Gibbs free energies and to locate the transition states in SN2 reactions. The accuracy of the Hessian is demonstrated in comparison to ab initio results for polypeptides and a water cluster. By using the two-residues-per-fragment division, we achieved an accuracy of 3 cm-1 in the root-mean-square deviation of vibrational frequencies from ab initio for all three polyalanine isomers, while the error in the zero-point energy did not exceed 0.3 kcal/mol. The role of the secondary structure on IR spectra, zero-point energies, and Gibbs free energies is discussed.
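The link between an energy Hessian and vibrational frequencies can be sketched in its simplest, one-dimensional form: for a diatomic, the mass-weighted second derivative gives the harmonic wavenumber directly. The force constant and masses below are illustrative CO-like values, not FMO results:

```python
import math

AMU = 1.66053906660e-27   # atomic mass unit, kg
C_CM = 2.99792458e10      # speed of light, cm/s

def harmonic_wavenumber(k_force, m1_amu, m2_amu):
    """Harmonic vibrational wavenumber (cm^-1) of a diatomic from the force
    constant k (N/m) -- the 1-D analogue of diagonalizing a mass-weighted
    Hessian to obtain IR frequencies."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU  # reduced mass, kg
    omega = math.sqrt(k_force / mu)                   # angular freq, rad/s
    return omega / (2 * math.pi * C_CM)
```

In the polyatomic case, the same step is taken on the full mass-weighted Hessian, whose eigenvalues yield the normal-mode frequencies.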
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
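The variance-based global sensitivity step can be sketched with a generic Monte Carlo (pick-freeze) estimator of first-order Sobol' indices. The toy model and sample size below are illustrative; the study applies this machinery to the periodic error model, not to this function:

```python
import random

def sobol_first_order(f, d, n=50000, seed=7):
    """Monte Carlo (pick-freeze) estimate of first-order Sobol' indices for a
    model f of d independent U(0,1) inputs. For each input j, a third sample
    matrix reuses column j from A and the rest from B; the covariance of the
    two model evaluations estimates the variance explained by input j."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for j in range(d):
        fAB = [f([a[k] if k == j else b[k] for k in range(d)])
               for a, b in zip(A, B)]
        cov = sum(ya * yab for ya, yab in zip(fA, fAB)) / n - mean ** 2
        indices.append(cov / var)
    return indices
```

For the linear model f(x) = x1 + 0.2*x2 the exact indices are 1/1.04 and 0.04/1.04, which the estimator reproduces to Monte Carlo accuracy.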
Knobel, Mark; Finkbeiner, Matthew; Caramazza, Alfonso
2008-03-01
The effect of lexical frequency on language-processing tasks is exceptionally reliable. For example, pictures with higher frequency names are named faster and more accurately than those with lower frequency names. Experiments with normal participants and patients strongly suggest that this production effect arises at the level of lexical access. Further work has suggested that within lexical access this effect arises at the level of lexical representations. Here we present patient E.C. who shows an effect of lexical frequency on his nonword error rate. The best explanation of his performance is that there is an additional locus of frequency at the interface of lexical and segmental representational levels. We confirm this hypothesis by showing that only computational models with frequency at this new locus can produce a similar error pattern to that of patient E.C. Finally, in an analysis of a large group of Italian patients, we show that there exist patients who replicate E.C.'s pattern of results and others who show the complementary pattern of frequency effects on semantic error rates. Our results combined with previous findings suggest that frequency plays a role throughout the process of lexical access.
Quantifying Uncertainty in Instantaneous Orbital Data Products of TRMM over Indian Subcontinent
NASA Astrophysics Data System (ADS)
Jayaluxmi, I.; Nagesh, D.
2013-12-01
In the last 20 years, microwave radiometers have taken satellite images of earth's weather proving to be a valuable tool for quantitative estimation of precipitation from space. However, along with the widespread acceptance of microwave based precipitation products, it has also been recognized that they contain large uncertainties. While most of the uncertainty evaluation studies focus on the accuracy of rainfall accumulated over time (e.g., season/year), evaluation of instantaneous rainfall intensities from satellite orbital data products are relatively rare. These instantaneous products are known to potentially cause large uncertainties during real time flood forecasting studies at the watershed scale. Especially over land regions, where the highly varying land surface emissivity offer a myriad of complications hindering accurate rainfall estimation. The error components of orbital data products also tend to interact nonlinearly with hydrologic modeling uncertainty. Keeping these in mind, the present study fosters the development of uncertainty analysis using instantaneous satellite orbital data products (version 7 of 1B11, 2A25, 2A23) derived from the passive and active sensors onboard Tropical Rainfall Measuring Mission (TRMM) satellite, namely TRMM microwave imager (TMI) and Precipitation Radar (PR). The study utilizes 11 years of orbital data from 2002 to 2012 over the Indian subcontinent and examines the influence of various error sources on the convective and stratiform precipitation types. Analysis conducted over the land regions of India investigates three sources of uncertainty in detail. These include 1) Errors due to improper delineation of rainfall signature within microwave footprint (rain/no rain classification), 2) Uncertainty offered by the transfer function linking rainfall with TMI low frequency channels and 3) Sampling errors owing to the narrow swath and infrequent visits of TRMM sensors. 
Case study results obtained during the Indian summer monsoon months of June-September are presented using contingency table statistics, performance diagrams, scatter plots and probability density functions. Our study demonstrates that copula theory can be efficiently used to represent the highly nonlinear dependency structure of rainfall with respect to the TMI low-frequency channels of 19, 21 and 37 GHz. This questions the exclusive usage of the high-frequency 85 GHz channel in TMI overland rainfall retrieval algorithms. Further, PR sampling errors revealed using a statistical bootstrap technique were found to be below 30% in relative terms (for 2-degree grids) over India, with magnitudes that were biased towards the stratiform rainfall type and depended on the sampling technique employed. These findings clearly document that proper characterization of the error structure of TMI and PR has wider implications for decision making prior to incorporating the resulting orbital products into basin-scale hydrologic modeling.
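The bootstrap assessment of sampling error can be sketched generically: resample the observations with replacement and report the spread of the resampled mean relative to the sample mean. The synthetic sample in the usage note stands in for gridded rain rates; this is not the study's PR-specific procedure:

```python
import random
import statistics

def bootstrap_relative_error(sample, n_boot=2000, seed=3):
    """Bootstrap estimate of the relative sampling error: SD of the
    resampled mean divided by the sample mean."""
    rng = random.Random(seed)
    n = len(sample)
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in range(n)]
        means.append(sum(resample) / n)
    return statistics.pstdev(means) / (sum(sample) / n)
```

For a sample of n roughly independent observations the result approaches sd/(mean*sqrt(n)), so infrequent satellite overpasses (small n) translate directly into large relative sampling errors.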
Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.
Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret
2014-01-01
Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. 
Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
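The error-correction idea (learn a mapping from the biased sensor estimate to the reference value, then apply it to new readings) can be sketched with ordinary least squares. This simple linear fit is a stand-in for the gradient-boosted trees with a Huber loss used in the study, and the numbers in the test are synthetic:

```python
def fit_error_correction(sensor_vals, truth_vals):
    """Least-squares linear error-correction model: predict the reference
    (e.g. optical motion capture) value from the depth-camera estimate.
    Returns a callable that corrects new sensor readings."""
    n = len(sensor_vals)
    mx = sum(sensor_vals) / n
    my = sum(truth_vals) / n
    sxx = sum((x - mx) ** 2 for x in sensor_vals)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(sensor_vals, truth_vals))
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda x: intercept + slope * x
```

Training on paired (sensor, reference) data and evaluating on held-out subjects, as done in the study, guards against the model memorizing person-specific bias.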
Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data
NASA Astrophysics Data System (ADS)
Dias, Nelson Luís
2018-01-01
A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
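The core operation, averaging the periodogram over a window of contiguous frequencies, can be sketched as follows; the direct DFT and window choice are illustrative, not the paper's full bias-correction and ogive machinery:

```python
import cmath
import math

def periodogram(x):
    """Raw periodogram of a real series via a direct DFT
    (O(n^2), fine for short illustrative series)."""
    n = len(x)
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * i / n)
                    for i, v in enumerate(x))) ** 2 / n
            for k in range(n // 2 + 1)]

def smooth_spectrum(pgram, m):
    """Average the periodogram over non-overlapping windows of m contiguous
    frequencies (skipping the DC bin). The random error of each smoothed
    estimate drops roughly as 1/sqrt(m), at the cost of frequency
    resolution."""
    return [sum(pgram[i:i + m]) / m
            for i in range(1, len(pgram) - m + 1, m)]
```

For white noise the raw periodogram ordinates have relative SD near 1, while the m-averaged estimates have relative SD near 1/sqrt(m), which is the variance reduction the paper quantifies and extends to ogives and cross-spectra.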
NASA Astrophysics Data System (ADS)
Baran, A. J.; Hesse, Evelyn; Sourdeval, Odran
2017-03-01
Future satellite missions, from 2022 onwards, will obtain near-global measurements of cirrus at microwave and sub-millimetre frequencies. To realise the potential of these observations, fast and accurate light-scattering methods are required to calculate scattered millimetre and sub-millimetre intensities from complex ice crystals. Here, the applicability of the ray tracing with diffraction on facets (RTDF) method in predicting the bulk scalar optical properties and phase functions of randomly oriented hexagonal ice columns and hexagonal ice aggregates at millimetre frequencies is investigated. The applicability of RTDF is shown to be acceptable down to size parameters of about 18, between the frequencies of 243 and 874 GHz. It is demonstrated that RTDF is generally well within about 10% of T-matrix solutions obtained for the scalar optical properties assuming hexagonal ice columns. Moreover, on replacing the electromagnetic scalar optical property solutions obtained for the hexagonal ice aggregate with the RTDF counterparts at size parameter values of about 18 or greater, the bulk scalar optical properties can be calculated to generally well within ±5% of an electromagnetic-based database. The RTDF-derived bulk scalar optical properties result in brightness temperature errors generally within about ±4 K at 874 GHz. Differing microphysics assumptions can easily exceed such errors. Similar findings hold for the bulk scattering phase functions. This is owing to the scattering solutions being dominated by the processes of diffraction and reflection, both of which are well described by RTDF. The impact of centimetre-sized complex ice crystals on interpreting cirrus polarisation measurements at sub-millimetre frequencies is discussed.
Mass and stiffness estimation using mobile devices for structural health monitoring
NASA Astrophysics Data System (ADS)
Le, Viet; Yu, Tzuyang
2015-04-01
In the structural health monitoring (SHM) of civil infrastructure, dynamic methods using mass, damping, and stiffness for characterizing structural health have been a traditional and widely used approach. Changes in these system parameters over time indicate the progress of structural degradation or deterioration. In these methods, the capability of predicting system parameters is essential to success. In this paper, research work on the development of a dynamic SHM method based on perturbation analysis is reported. The concept is to use an externally applied mass to perturb an unknown system and measure the natural frequency of the system. Derived theoretical expressions for mass and stiffness prediction are experimentally verified on a building model. Dynamic responses of the building model perturbed by various masses in free vibration were measured by a mobile device (cell phone) to extract the natural frequency of the building model. A single-degree-of-freedom (SDOF) modeling approach was adopted to suit measurement with a cell phone. From the experimental results, it is shown that the percentage error of the predicted mass increases when the mass ratio increases, while the percentage error of the predicted stiffness decreases when the mass ratio increases. This work also demonstrates the potential use of mobile devices in the health monitoring of civil infrastructure.
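The SDOF perturbation idea admits a closed form: with natural frequencies measured before and after adding a known mass, w1^2 = k/m and w2^2 = k/(m + dm) can be solved for m and k. The sketch below shows that algebra with illustrative numbers; the paper's exact derived expressions may differ:

```python
import math

def mass_stiffness_from_perturbation(f1, f2, dm):
    """Recover SDOF mass m (kg) and stiffness k (N/m) from the natural
    frequency f1 (Hz) of the unperturbed system, the frequency f2 (Hz)
    after adding a known mass dm (kg):
        w1^2 = k / m,  w2^2 = k / (m + dm)
        =>  m = dm * w2^2 / (w1^2 - w2^2),  k = m * w1^2."""
    w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2
    m = dm * w2 ** 2 / (w1 ** 2 - w2 ** 2)
    return m, m * w1 ** 2
```

Since the denominator w1^2 - w2^2 shrinks as dm shrinks, small added masses amplify frequency-measurement error, consistent with the mass-ratio trends reported above.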
Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model
NASA Technical Reports Server (NTRS)
Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.
2010-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (ν) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
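A scalar toy version of model-reference adaptive control with an added damping modification can be sketched as follows. Everything here is illustrative: the plant, gains, and in particular the form of the modification term (nu * x^2 * theta / a_m) are a simplified stand-in inspired by the modification, not the paper's F-18 control law:

```python
import math

def simulate_mrac(gamma=20.0, nu=0.1, theta_star=1.0, a_m=-2.0,
                  dt=0.001, t_end=30.0):
    """Scalar MRAC sketch. Plant: x' = theta_star*x + u with theta_star
    unknown; control u = a_m*x + r - theta*x; reference model
    x_m' = a_m*x_m + r. Standard adaptive law theta' = gamma*e*x, plus a
    damping modification gamma*nu*x^2*theta/a_m (stabilizing for a_m < 0)."""
    x = x_m = theta = 0.0
    errs = []
    for i in range(int(t_end / dt)):
        r = math.sin(i * dt)
        u = a_m * x + r - theta * x
        x_dot = theta_star * x + u
        xm_dot = a_m * x_m + r
        e = x - x_m
        theta_dot = gamma * (e * x + nu * x * x * theta / a_m)
        x += dt * x_dot
        x_m += dt * xm_dot
        theta += dt * theta_dot
        errs.append(abs(e))
    return errs, theta
```

With the sinusoidal reference providing persistent excitation, the tracking error settles near zero while the damping term keeps the parameter trajectory smooth at large gamma.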
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
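The sparse-NLMS building block can be sketched with a zero-attracting (l1-penalized) NLMS filter. Note the hedge: a fixed step size mu is used here, so this shows the ISS baseline with a sparsity penalty; the paper's contribution, the variable step-size schedule, is not reproduced:

```python
def zero_attracting_nlms(x_in, d, order, mu=0.5, rho=1e-4, eps=1e-8):
    """Zero-attracting NLMS: the standard normalized-LMS update plus an l1
    (sparsity) penalty term rho*sign(w) that pulls inactive taps toward
    zero. x_in is the input sequence, d the desired (channel output)
    sequence, order the number of channel taps to estimate."""
    sgn = lambda v: (v > 0) - (v < 0)
    w = [0.0] * order
    for n in range(order - 1, len(x_in)):
        tap = x_in[n - order + 1:n + 1][::-1]   # tap[k] = x[n-k]
        e = d[n] - sum(wi * xi for wi, xi in zip(w, tap))
        norm = sum(xi * xi for xi in tap) + eps
        w = [wi + mu * e * xi / norm - rho * sgn(wi)
             for wi, xi in zip(w, tap)]
    return w
```

On a synthetic sparse channel the filter converges to the true taps, with the zero-attractor holding the inactive taps near zero.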
PMID:25089286
Computational Modeling of Morphological Effects in Bangla Visual Word Recognition.
Dasgupta, Tirthankar; Sinha, Manjira; Basu, Anupam
2015-10-01
In this paper we aim to model the organization and processing of Bangla polymorphemic words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a polymorphemic word as a whole or decomposes the word into its constituent morphemes and then recognizes them accordingly. To address this issue, we adopted two different strategies. First, we conducted a masked priming experiment with native speakers. Analysis of reaction time (RT) and error rates indicates that, in general, morphologically derived words are accessed via a decomposition process. Next, based on the collected RT data, we developed a computational model that can explain the processing phenomena of the access and representation of Bangla derivationally suffixed words. In order to do so, we first explored the individual roles of different linguistic features of a Bangla morphologically complex word and observed that processing of Bangla morphologically complex words depends upon several factors, such as the base and surface word frequency, suffix type/token ratio, suffix family size, and suffix productivity. Accordingly, we proposed different feature models. Finally, we combined these feature models into a new model that takes advantage of the individual feature models and successfully explains the processing of most Bangla morphologically derived words. Our proposed model shows an accuracy of around 80%, which outperforms the other related frequency models.
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
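The idea of inferring a field direction from spin-resolved electron distributions can be sketched as follows, assuming a gyrotropic pitch angle distribution peaked at 90°. The synthetic flux model and the estimator (smallest-eigenvalue axis of a flux-weighted direction tensor) are illustrative assumptions, not the authors' proxy method.

```python
import numpy as np

rng = np.random.default_rng(1)
b_true = np.array([0.3, -0.4, 0.866])
b_true /= np.linalg.norm(b_true)

# Synthetic measurement: random look directions on the sphere and a gyrotropic
# "pancake" flux peaked at 90 deg pitch angle (illustrative model, plus noise).
d = rng.standard_normal((5000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
flux = 1.0 - (d @ b_true) ** 2 + 0.01 * rng.standard_normal(5000)

# For a 90-deg-peaked distribution, the symmetry axis is the eigenvector of
# the flux-weighted direction tensor with the SMALLEST eigenvalue.
M = (d * flux[:, None]).T @ d
w, V = np.linalg.eigh(M)          # eigh returns eigenvalues in ascending order
b_est = V[:, 0]
err_deg = np.degrees(np.arccos(abs(b_est @ b_true)))
print(err_deg)
```

The sign ambiguity of the symmetry axis is intrinsic to this estimator; the abs() in the angular error reflects that.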
Issues with data and analyses: Errors, underlying themes, and potential solutions
Allison, David B.
2018-01-01
Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft for determining the location of ground-based transmitters is the tracking of high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
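A minimal two-class sketch of the Fisher direction on synthetic Gaussian data is shown below. The midpoint threshold used here is a simplification (the paper derives an optimal threshold), and the data are a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic Gaussian classes, well separated along the first axis
X1 = rng.standard_normal((200, 2)) + np.array([2.0, 0.0])
X2 = rng.standard_normal((200, 2)) + np.array([-2.0, 0.0])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter matrix (sum of per-class scatters)
Sw = np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)
w = np.linalg.solve(Sw, m1 - m2)      # Fisher direction
t = 0.5 * (w @ m1 + w @ m2)           # simple midpoint threshold on projection

# Resubstitution error estimate (the paper uses leave-one-out instead)
err = (np.sum(X1 @ w < t) + np.sum(X2 @ w >= t)) / 400.0
print(err)
```

Leave-one-out estimation, as in the paper, would re-fit w and t with each sample held out; the resubstitution count above is only the simplest baseline.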
Routine cognitive errors: a trait-like predictor of individual differences in anxiety and distress.
Fetterman, Adam K; Robinson, Michael D
2011-02-01
Five studies (N=361) sought to model a class of errors--namely, those in routine tasks--that several literatures have suggested may predispose individuals to higher levels of emotional distress. Individual differences in error frequency were assessed in choice reaction-time tasks of a routine cognitive type. In Study 1, it was found that tendencies toward error in such tasks exhibit trait-like stability over time. In Study 3, it was found that tendencies toward error exhibit trait-like consistency across different tasks. Higher error frequency, in turn, predicted higher levels of negative affect, general distress symptoms, displayed levels of negative emotion during an interview, and momentary experiences of negative emotion in daily life (Studies 2-5). In all cases, such predictive relations remained significant with individual differences in neuroticism controlled. The results thus converge on the idea that error frequency in simple cognitive tasks is a significant and consequential predictor of emotional distress in everyday life. The results are novel, but discussed within the context of the wider literatures that informed them.
Refractive errors in children and adolescents in Bucaramanga (Colombia).
Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly
2017-01-01
The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.
2009-01-01
Background The characterisation, or binning, of metagenome fragments is an important first step to further downstream analysis of microbial consortia. Here, we propose a one-dimensional signature, OFDEG, derived from the oligonucleotide frequency profile of a DNA sequence, and show that it is possible to obtain a meaningful phylogenetic signal for relatively short DNA sequences. The one-dimensional signal is essentially a compact representation of higher dimensional feature spaces of greater complexity and is intended to improve on the tetranucleotide frequency feature space preferred by current compositional binning methods. Results We compare the fidelity of OFDEG against tetranucleotide frequency in both an unsupervised and semi-supervised setting on simulated metagenome benchmark data. Four tests were conducted using assembler output of Arachne and phrap, and for each, performance was evaluated on contigs which are greater than or equal to 8 kbp in length and contigs which are composed of at least 10 reads. Using G-C content in conjunction with OFDEG gave an average accuracy of 96.75% (semi-supervised) and 95.19% (unsupervised), versus 94.25% (semi-supervised) and 82.35% (unsupervised) for tetranucleotide frequency. Conclusion We have presented an observation of an alternative characteristic of DNA sequences. The proposed feature representation has proven to be more beneficial than the existing tetranucleotide frequency space to the metagenome binning problem. We do note, however, that our observation of OFDEG deserves further analysis and investigation. Unsupervised clustering revealed OFDEG related features performed better than standard tetranucleotide frequency in representing a relevant organism specific signal. Further improvement in binning accuracy is given by semi-supervised classification using OFDEG. 
The emphasis on a feature-driven, bottom-up approach to the problem of binning reveals promising avenues for future development of techniques to characterise short environmental sequences without bias toward cultivable organisms. PMID:19958473
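For reference, the tetranucleotide frequency feature space that OFDEG is compared against can be computed directly; this helper is a generic sketch, not the authors' implementation.

```python
from itertools import product

def tetranucleotide_freqs(seq):
    """Normalized 4-mer frequency profile of a DNA sequence (256 features)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - 3):
        k = seq[i:i + 4]
        if k in counts:              # skip windows containing ambiguous bases
            counts[k] += 1
    total = sum(counts.values()) or 1
    return {k: c / total for k, c in counts.items()}

f = tetranucleotide_freqs("ACGTACGTACGT")
print(f["ACGT"])                     # 3 of the 9 windows are "ACGT"
```

Compositional binners cluster contigs in this 256-dimensional space; OFDEG compresses such profiles into a one-dimensional signature.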
Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays
NASA Astrophysics Data System (ADS)
Seibert, George E.
1987-10-01
This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date, is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and Poisson's ratio of the material used. 
The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-Gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shell's elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells" [1,2]. Building on the insight offered by these papers, we developed our design tools around two derived parameters: the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/l ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/l values greater than 10, the influence of edges is almost totally absent on interior behavior. The b/l ratio, the basis of all our normalizations, is the most universal term in the description of correctability, or the ratio of residual to input errors. The data presented in the paper show that the rms residual error divided by the peak amplitude of the input error function is related to the actuator-spacing-to-characteristic-length ratio by the following expression:

    RMS Residual Error / Initial Error Ampl. = k (b/l)^3.5    (1)

The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g., 5 cycles/diameter). The studies also yielded insight into the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lends itself to rapid evaluation of the effects of trading faceplate weight for increased actuator count.
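These relationships can be captured in a short sketch. The characteristic-length formula below is the standard shallow-shell form l = (R^2 t^2 / (12 (1 - nu^2)))^(1/4); the Poisson's ratio, radius, thickness, and the constant k are example values drawn from the ranges quoted above, not a specific design from the paper.

```python
def characteristic_length(R, t, nu=0.17):
    """Shallow-shell characteristic length: l = (R^2 t^2 / (12(1-nu^2)))**0.25.
    R: radius of curvature, t: faceplate thickness, nu: Poisson's ratio."""
    return (R * R * t * t / (12.0 * (1.0 - nu * nu))) ** 0.25

def rms_residual(input_ampl, b, l, k=0.001):
    """Correctability scaling: residual = k * (b/l)**3.5 * input amplitude,
    with k between ~0.001 (low spatial frequency) and ~0.05 (high)."""
    return k * (b / l) ** 3.5 * input_ampl

l = characteristic_length(R=10.0, t=0.01)        # metres; fused-silica-like nu
res = rms_residual(input_ampl=1.0, b=2.0 * l, l=l, k=0.001)
print(l, res)
```

Halving the actuator spacing b reduces the residual by a factor of 2^3.5 (about 11), which is the basic trade between faceplate weight and actuator count noted above.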
High-frequency signal and noise estimates of CSR GRACE RL04
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.
2012-12-01
A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins examined here are near 1.0. Comparisons with the GLDAS hydrological model and high-frequency GRACE series developed at other centers confirm that CSR GRACE RL04 cannot accurately and reliably measure hydrological signal above 3-9 cycles/year, because the large-scale hydrological signal typical at those frequencies has low power compared to the GRACE errors.
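The second error-estimation technique mentioned above (differencing series from different centers) can be sketched as follows; the annual signal, error level, and sampling are synthetic assumptions. If two centers' series share the same signal but have independent, equal-variance errors, then var(A - B) = 2 * var_err.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 4.0, 1 / 365.0)                 # 4 years of daily samples
signal = 10.0 * np.sin(2 * np.pi * t)            # annual signal (arbitrary units)
A = signal + rng.standard_normal(t.size) * 2.0   # two centers' series with
B = signal + rng.standard_normal(t.size) * 2.0   # independent errors (sigma = 2)

# Independent, equal-variance errors imply var(A - B) = 2 * var_err
var_err = np.var(A - B) / 2.0
snr = np.var(signal) / var_err
print(var_err, snr)
```

The same estimator applied band-by-band (after filtering A and B to a frequency band) gives the frequency-resolved signal-to-noise ratios discussed in the abstract.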
NASA Technical Reports Server (NTRS)
Kaufmann, D. C.
1976-01-01
The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in 10^10. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
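The quoted 7.6% error rate and its 6.4% to 8.8% confidence interval can be reproduced with a standard normal-approximation interval for a proportion (the paper does not state its exact method, so this is a plausible reconstruction):

```python
import math

def prop_ci(k, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# 143 prescribing errors among 1879 screened prescriptions
p, lo, hi = prop_ci(143, 1879)
print(round(100 * p, 1), round(100 * lo, 1), round(100 * hi, 1))
```

For rates this far from 0 and 1 with n near 2000, the normal approximation and exact binomial intervals agree to the precision reported.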
NASA Astrophysics Data System (ADS)
Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.
2011-06-01
It can be important to determine the correlation of different frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so that knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite Systems positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. The errors in the two-frequency range-finding method caused by scintillation have then been estimated for particular ionospheric conditions and for a realistic fully three-dimensional model of the ionospheric turbulence. The results, presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4; the errors diverge further from a linear relationship the stronger the scintillation effects are, and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies were found to depend on the phase-retrieval procedure and to decrease slowly as both the variance of the electron density fluctuations and the number of cycle slips increase.
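For context, the standard first-order dual-frequency correction that such diffractional errors sit on top of is the ionosphere-free combination. The sketch below uses the GPS L1/L2 carrier frequencies and a synthetic dispersive delay; it removes the first-order 1/f^2 term exactly, which is why the residual scintillation-induced (diffractional) errors studied here matter.

```python
# First-order ionosphere-free pseudorange combination, GPS L1/L2 (Hz)
f1, f2 = 1575.42e6, 1227.60e6

def iono_free(P1, P2):
    """Combine two-frequency ranges to cancel the first-order 1/f^2 delay."""
    return (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)

# Synthetic ranges: true range plus a dispersive delay scaling as 1/f^2
rho, I1 = 2.0e7, 5.0                    # metres; 5 m delay at L1 (assumed)
P1 = rho + I1
P2 = rho + I1 * (f1 / f2) ** 2          # same delay rescaled to L2
print(iono_free(P1, P2) - rho)          # first-order term cancels exactly
```

Diffractional effects under strong scintillation are not dispersive in this simple 1/f^2 sense, so they survive the combination and set the centimeter-level floor discussed above.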
Flood-frequency prediction methods for unregulated streams of Tennessee, 2000
Law, George S.; Tasker, Gary D.
2003-01-01
Up-to-date flood-frequency prediction methods for unregulated, ungaged rivers and streams of Tennessee have been developed. Prediction methods include the regional-regression method and the newer region-of-influence method. The prediction methods were developed using stream-gage records from unregulated streams draining basins having from 1 percent to about 30 percent total impervious area. These methods, however, should not be used in heavily developed or storm-sewered basins with impervious areas greater than 10 percent. The methods can be used to estimate 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence-interval floods of most unregulated rural streams in Tennessee. A computer application was developed that automates the calculation of flood frequency for unregulated, ungaged rivers and streams of Tennessee. Regional-regression equations were derived by using both single-variable and multivariable regional-regression analysis. Contributing drainage area is the explanatory variable used in the single-variable equations. Contributing drainage area, main-channel slope, and a climate factor are the explanatory variables used in the multivariable equations. Deleted-residual standard error for the single-variable equations ranged from 32 to 65 percent. Deleted-residual standard error for the multivariable equations ranged from 31 to 63 percent. These equations are included in the computer application to allow easy comparison of results produced by the different methods. The region-of-influence method calculates multivariable regression equations for each ungaged site and recurrence interval using basin characteristics from 60 similar sites selected from the study area. Explanatory variables that may be used in regression equations computed by the region-of-influence method include contributing drainage area, main-channel slope, a climate factor, and a physiographic-region factor. 
Deleted-residual standard error for the region-of-influence method tended to be only slightly smaller than those for the regional-regression method and ranged from 27 to 62 percent.
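A single-variable regional regression of the kind described above is a least-squares fit in log space. The drainage areas and power-law coefficients below are synthetic illustrations, not Tennessee's published equations.

```python
import numpy as np

# Synthetic single-variable regional regression: Q = a * A**b, fit in log space
A = np.array([10.0, 50.0, 120.0, 300.0, 800.0])   # contributing areas, mi^2
Q = 30.0 * A ** 0.75                              # exact power law (no scatter)

X = np.column_stack([np.ones(A.size), np.log10(A)])
coef, *_ = np.linalg.lstsq(X, np.log10(Q), rcond=None)
a, b = 10.0 ** coef[0], coef[1]
print(a, b)
```

With real gage records the log-space residuals supply the deleted-residual standard errors quoted above; the region-of-influence method repeats this fit per ungaged site using only its 60 most similar gages.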
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaelsen, Kelly; Krishnaswamy, Venkat; Pogue, Brian W.
2012-07-15
Purpose: Design optimization and phantom validation of an integrated digital breast tomosynthesis (DBT) and near-infrared spectral tomography (NIRST) system targeting improvement in sensitivity and specificity of breast cancer detection is presented. Factors affecting instrumentation design include minimization of cost, complexity, and examination time while maintaining high fidelity NIRST measurements with sufficient information to recover accurate optical property maps. Methods: Reconstructed DBT slices from eight patients with abnormal mammograms provided anatomical information for the NIRST simulations. A limited frequency domain (FD) and extensive continuous wave (CW) NIRST system was modeled. The FD components provided tissue scattering estimations used in the reconstruction of the CW data. Scattering estimates were perturbed to study the effects on hemoglobin recovery. Breast mimicking agar phantoms with inclusions were imaged using the combined DBT/NIRST system for comparison with simulation results. Results: Patient simulations derived from DBT images show successful reconstruction of both normal and malignant lesions in the breast. They also demonstrate the importance of accurately quantifying tissue scattering. Specifically, 20% errors in optical scattering resulted in 22.6% or 35.1% error in quantification of total hemoglobin concentrations, depending on whether scattering was over- or underestimated, respectively. Limited frequency-domain optical signal sampling provided two regions scattering estimates (for fat and fibroglandular tissues) that led to hemoglobin concentrations that reduced the error in the tumor region by 31% relative to when a single estimate of optical scattering was used throughout the breast volume of interest. Acquiring frequency-domain data with six wavelengths instead of three did not significantly improve the hemoglobin concentration estimates. 
Simulation results were confirmed through experiments in two-region breast mimicking gelatin phantoms. Conclusions: Accurate characterization of scattering is necessary for quantification of hemoglobin. Based on this study, a system design is described to optimally combine breast tomosynthesis with NIRST.
A digital optical phase-locked loop for diode lasers based on field programmable gate array.
Xu, Zhouxiang; Zhang, Xian; Huang, Kaikai; Lu, Xuanhui
2012-09-01
We have designed and implemented a highly digital optical phase-locked loop (OPLL) for diode lasers in atom interferometry. The three parts of the control circuit in this OPLL, namely the phase and frequency detector (PFD), the loop filter, and the proportional integral derivative (PID) controller, are implemented in a single field programmable gate array chip. A structure type compatible with the model MAX9382/MCH12140 is chosen for the PFD, and pipeline and parallelism techniques have been adopted in the PID controller. In particular, a high-speed clock and a twisted ring counter have been integrated in the most crucial part, the loop filter. This OPLL has a narrow beat-note linewidth below 1 Hz, a residual mean-square phase error of 0.14 rad^2, and a transition time of 100 μs under a 10 MHz frequency step. A main innovation of this design is the complete digitization of the whole control circuit in an OPLL for diode lasers.
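The PID portion of such a digital loop can be sketched in a few lines. The discrete parallel-form controller and the first-order plant below are generic illustrations with assumed gains; they do not reproduce the paper's FPGA loop filter or its actual laser dynamics.

```python
class PID:
    """Discrete PID controller (parallel form) with a fixed sample period."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev_e = 0.0

    def step(self, e):
        self.i += e * self.dt                  # integral accumulator
        d = (e - self.prev_e) / self.dt        # backward-difference derivative
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

# Drive a crude first-order plant (y' = u - y) through a normalized
# frequency step; gains are illustrative assumptions.
pid = PID(kp=2.0, ki=200.0, kd=0.001, dt=1e-4)
y, target = 0.0, 1.0
for _ in range(50000):                         # 5 s of simulated time
    u = pid.step(target - y)
    y += 1e-4 * (u - y)

print(abs(target - y))
```

The integral term removes the steady-state error after the step, which is the role the accumulator-based loop filter plays in the hardware loop.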
Autonomous Pointing Control of a Large Satellite Antenna Subject to Parametric Uncertainty
Wu, Shunan; Liu, Yufei; Radice, Gianmarco; Tan, Shujun
2017-01-01
With the development of satellite mobile communications, large antennas are now widely used. Precise pointing of the antenna's optical axis is essential for many space missions. This paper addresses the challenging problem of high-precision autonomous pointing control of a large satellite antenna. The pointing dynamics are first derived. A proportional-derivative feedback controller and a structural filter, used to perform pointing maneuvers while suppressing antenna vibrations, are then presented. An adaptive controller that estimates the actual system frequencies in the presence of modal parameter uncertainty is proposed. To reduce periodic errors, modified controllers, which combine the proposed adaptive controller with an active disturbance rejection filter, are then developed. The system stability and robustness are analyzed and discussed in the frequency domain. Numerical results are finally provided and demonstrate that the proposed controllers have good autonomy and robustness. PMID:28287450
NASA Technical Reports Server (NTRS)
Davarian, F.
1994-01-01
The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
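The kind of loop LOOP models can be sketched in a few lines of numerical simulation. This is a hedged illustration, not the FORTRAN program itself: the single loop-gain update below stands in for the first-order loop with its R-C filter, and the gain, noise level, and initial offset are made-up values. Like LOOP, it reports the mean and standard deviation of the frequency error.

```python
import numpy as np

def afc_loop(f0_error_hz, loop_gain, n_bits, noise_std_hz, rng):
    """First-order AFC sketch: each bit interval, a noisy frequency
    discriminator reading is fed back to reduce the frequency error."""
    errs = np.empty(n_bits)
    f_err = f0_error_hz
    for k in range(n_bits):
        measured = f_err + rng.normal(0.0, noise_std_hz)  # noisy discriminator output
        f_err -= loop_gain * measured                     # first-order correction
        errs[k] = f_err
    return errs

rng = np.random.default_rng(0)
errs = afc_loop(f0_error_hz=500.0, loop_gain=0.1, n_bits=2000, noise_std_hz=5.0, rng=rng)
steady = errs[500:]                 # discard the acquisition transient
print(steady.mean(), steady.std())  # residual frequency error statistics
```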
Lepoittevin, Camille; Frigerio, Jean-Marc; Garnier-Géré, Pauline; Salin, Franck; Cervera, María-Teresa; Vornam, Barbara; Harvengt, Luc; Plomion, Christophe
2010-01-01
Background There is considerable interest in the high-throughput discovery and genotyping of single nucleotide polymorphisms (SNPs) to accelerate genetic mapping and enable association studies. This study provides an assessment of EST-derived and resequencing-derived SNP quality in maritime pine (Pinus pinaster Ait.), a conifer characterized by a huge genome size (∼23.8 Gb/C). Methodology/Principal Findings A 384-SNP GoldenGate genotyping array was built from (i) 184 SNPs originally detected in a set of 40 re-sequenced candidate genes (in vitro SNPs), chosen on the basis of functionality scores, presence of neighboring polymorphisms, minor allele frequencies and linkage disequilibrium, and (ii) 200 SNPs screened from ESTs (in silico SNPs), selected based on the number of ESTs used for SNP detection, the SNP minor allele frequency and the quality of SNP flanking sequences. The global success rate of the assay was 66.9%, and a conversion rate (considering only polymorphic SNPs) of 51% was achieved. In vitro SNPs showed significantly higher genotyping-success and conversion rates than in silico SNPs (+11.5% and +18.5%, respectively). The reproducibility was 100%, and the genotyping error rate was very low (0.54%, dropping to 0.06% when four SNPs showing elevated error rates were removed). Conclusions/Significance This study demonstrates that ESTs provide a resource for SNP identification in non-model species that requires no additional bench work and little bioinformatics analysis. However, the time and cost benefits of in silico SNPs are counterbalanced by a lower conversion rate than for in vitro SNPs. This drawback is acceptable for population-based experiments, but could be dramatic in experiments involving samples from narrow genetic backgrounds.
In addition, we showed that both the visual inspection of genotyping clusters and the estimation of a per SNP error rate should help identify markers that are not suitable to the GoldenGate technology in species characterized by a large and complex genome. PMID:20543950
Circular Probable Error for Circular and Noncircular Gaussian Impacts
2012-09-01
% 1M simulated impacts
ph(k) = mean(imp(:,1).^2 + imp(:,2).^2 <= CEP^2);  % hit frequency on CEP
end
phit(j) = mean(ph...  % avg 100 hit frequencies to "incr n"
end
% GRAPHICS
plot(i, phit, 'r-');  % error exponent versus Ph estimate
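The hit-frequency calculation in the excerpt is a Monte Carlo estimate: draw Gaussian impact points and count the fraction inside a circle of radius CEP. A cleaned-up sketch of that idea (the sigma values and sample count here are illustrative, not the report's):

```python
import numpy as np

def hit_probability(cep, sigma_x, sigma_y, n=1_000_000, seed=0):
    """Monte Carlo fraction of Gaussian impacts landing inside radius CEP.
    sigma_x != sigma_y gives the noncircular (elliptical) case."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n)
    y = rng.normal(0.0, sigma_y, n)
    return np.mean(x**2 + y**2 <= cep**2)

# Circular case: CEP = sigma * sqrt(2 ln 2) ≈ 1.1774 sigma encloses half the impacts.
p = hit_probability(cep=1.1774, sigma_x=1.0, sigma_y=1.0)
print(p)  # ≈ 0.5 by the definition of circular probable error
```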
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
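Both verification measures named above are simple functions of the 2x2 forecast/observation contingency table. A sketch with hypothetical counts (the numbers are made up for illustration; the scores themselves are standard):

```python
def percent_correct(hits, false_alarms, misses, correct_negs):
    """PC: fraction of all forecasts that were correct."""
    n = hits + false_alarms + misses + correct_negs
    return (hits + correct_negs) / n

def hanssen_kuipers(hits, false_alarms, misses, correct_negs):
    """HKD (Peirce skill score): detection rate minus false-detection rate."""
    pod = hits / (hits + misses)                         # probability of detection
    pofd = false_alarms / (false_alarms + correct_negs)  # prob. of false detection
    return pod - pofd

# hypothetical contrail-occurrence contingency table
print(percent_correct(40, 10, 5, 45))   # 0.85
print(hanssen_kuipers(40, 10, 5, 45))   # ≈ 0.707
```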
Airplane wing vibrations due to atmospheric turbulence
NASA Technical Reports Server (NTRS)
Pastel, R. L.; Caruthers, J. E.; Frost, W.
1981-01-01
The magnitude of error introduced by wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross-spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large errors in measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.
Personal protective equipment for the Ebola virus disease: A comparison of 2 training programs.
Casalino, Enrique; Astocondor, Eugenio; Sanchez, Juan Carlos; Díaz-Santana, David Enrique; Del Aguila, Carlos; Carrillo, Juan Pablo
2015-12-01
Personal protective equipment (PPE) for preventing Ebola virus disease (EVD) includes basic PPE (B-PPE) and enhanced PPE (E-PPE). Our aim was to compare conventional training programs (CTPs) and reinforced training programs (RTPs) on the use of B-PPE and E-PPE. Four groups were created, designated CTP-B, CTP-E, RTP-B, and RTP-E. All groups received the same theoretical training, followed by 3 practical training sessions. A total of 120 students were included (30 per group). In all 4 groups, the frequency and number of total errors and critical errors decreased significantly over the course of the training sessions (P < .01). The RTP was associated with a greater reduction in the number of total errors and critical errors (P < .0001). During the third training session, we noted an error frequency of 7%-43%, a critical error frequency of 3%-40%, 0.3-1.5 total errors, and 0.1-0.8 critical errors per student. The B-PPE groups had the fewest errors and critical errors (P < .0001). Our results indicate that both training methods improved the student's proficiency, that B-PPE appears to be easier to use than E-PPE, that the RTP achieved better proficiency for both PPE types, and that a number of students are still potentially at risk for EVD contamination despite the improvements observed during the training. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi
2018-06-13
The magnetohydrodynamic (MHD) angular rate sensor (ARS), with a low noise level over an ultra-wide bandwidth, has been developed for lasing and imaging applications, especially line-of-sight (LOS) systems. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor's bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. Numerical results on the Coriolis effect and the frequency response of the modified MHD ARS are detailed. In addition, since the experimental results of the designed sensor are consistent with the simulation results, an analysis of the model errors is discussed. Our study provides an error analysis method for an MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation.
PMID:25583702
A digital frequency stabilization system of external cavity diode laser based on LabVIEW FPGA
NASA Astrophysics Data System (ADS)
Liu, Zhuohuan; Hu, Zhaohui; Qi, Lu; Wang, Tao
2015-10-01
Frequency stabilization of external cavity diode lasers plays an important role in physics research, and many laser frequency locking solutions have been proposed. Traditionally, the locking process was accomplished by an analog system, which has a fast feedback control response but is susceptible to environmental effects. To improve the automation level and reliability of the frequency stabilization system, we take a grating-feedback external cavity diode laser as the laser source and set up a digital frequency stabilization system based on a National Instruments FPGA (NI FPGA). The system consists of a saturated-absorption frequency stabilization beam path, a differential photoelectric detector, an NI FPGA board and a host computer. Functions such as piezoelectric transducer (PZT) sweeping, atomic saturated-absorption signal acquisition, signal peak identification, error signal generation and laser PZT voltage feedback control are all implemented in the LabVIEW FPGA program. Compared with an analog system built from logic gate circuits, the system performs stably and reliably, and the user interface programmed in LabVIEW is friendly. Moreover, owing to its reconfigurability, the LabVIEW program is easily ported to other NI FPGA boards. Most importantly, the system periodically checks the error signal: once an abnormal error signal is detected, the FPGA restarts the frequency stabilization process without manual intervention. From the fluctuation of the error signal of the atomic saturated-absorption line in the locked state, we infer that the laser frequency stability can reach 1 MHz.
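The sweep-then-lock sequence the system automates can be sketched numerically. This is not the LabVIEW FPGA code: the Lorentzian stand-in for the saturated-absorption line, the loop gain, and the step sizes are all illustrative assumptions.

```python
import numpy as np

def absorption(f, f_peak=0.0, width=1.0):
    """Lorentzian line shape, a stand-in for the atomic absorption peak."""
    return 1.0 / (1.0 + ((f - f_peak) / width) ** 2)

# 1) Sweep phase: scan across the line and locate the peak coarsely.
scan = np.linspace(-10.0, 10.0, 2001)
f = scan[np.argmax(absorption(scan))]

# 2) Lock phase: the derivative of the absorption signal is the error
#    signal; feedback drives it to zero, i.e. to the top of the peak.
gain, df = 0.4, 1e-3
f += 0.3                                  # perturb to exercise the lock
for _ in range(200):
    err = (absorption(f + df) - absorption(f - df)) / (2 * df)
    f += gain * err                       # push toward the zero crossing
print(abs(f))                             # converges to the peak at f = 0
```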
Errors from approximation of ODE systems with reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
2016-12-30
This code calculates the error incurred when systems of ordinary differential equations (ODEs) are approximated by Proper Orthogonal Decomposition (POD) reduced order models (ROMs), and compares and analyzes the errors for two POD ROM variants. The first variant is the standard POD ROM; the second is a modification of the method that uses the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
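A minimal sketch of the comparison such a code performs, under stand-in assumptions (a random stable linear ODE, forward-Euler time stepping, a rank-5 basis); the OSTI code itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, dt, steps = 50, 5, 0.01, 400
A = -np.diag(np.linspace(1.0, 10.0, n)) + 0.02 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)

def integrate(rhs, z0, steps, dt):
    zs = [z0]
    for _ in range(steps):
        zs.append(zs[-1] + dt * rhs(zs[-1]))   # forward Euler
    return np.array(zs)

X = integrate(lambda x: A @ x, x0, steps, dt)  # full-order trajectory (snapshots)

def rom_rel_error(snapshots):
    """Rank-r POD basis from the snapshots; max relative error of the ROM."""
    U, _, _ = np.linalg.svd(snapshots.T, full_matrices=False)
    Phi = U[:, :r]
    Ar = Phi.T @ A @ Phi                        # Galerkin-projected operator
    Xr = integrate(lambda z: Ar @ z, Phi.T @ x0, steps, dt)
    return np.abs(X - Xr @ Phi.T).max() / np.abs(X).max()

err_standard = rom_rel_error(X)                     # standard POD snapshots
err_deriv = rom_rel_error(np.vstack([X, X @ A.T]))  # with time-derivative snapshots
print(err_standard, err_deriv)
```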
NASA Astrophysics Data System (ADS)
Prasitmeeboon, Pitcha
Repetitive control (RC) is a control method that specifically aims to drive to zero the tracking error of control systems that execute a periodic command or are subject to periodic disturbances of known period. It uses the error from one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degrees of model phase error at all frequencies up to Nyquist. A zero-phase cutoff filter is normally used to robustify against high-frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods to use data to make real-time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and use of a real-time projection algorithm from adaptive control for each frequency.
The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. A Min-Max cost function over the learning rate is presented; the Min-Max problem can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although using the Min-Max objective was shown to improve learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies, whereas real-world models usually have error there. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to some frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning for frequencies above that interval.
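The core RC update described at the start (adjust the present command using the error from one period back) can be sketched on a toy discrete-time plant. The plant, gain, and disturbance below are illustrative assumptions, not a system from the dissertation:

```python
import numpy as np

N, periods, phi = 50, 60, 0.2        # period, learning passes, RC gain (assumed)
t = np.arange(N * periods)
d = np.sin(2 * np.pi * t / N)        # periodic disturbance of known period N

a, b = 0.9, 0.5                      # toy plant: y[n] = a*y[n-1] + b*u[n] + d[n]
y, u, e = np.zeros(t.size), np.zeros(t.size), np.zeros(t.size)

for n in range(1, t.size):
    if n >= N:
        u[n] = u[n - N] + phi * e[n - N]   # repetitive update, one period back
    y[n] = a * y[n - 1] + b * u[n] + d[n]
    e[n] = -y[n]                           # zero reference: reject the disturbance

rms_first = np.sqrt(np.mean(e[:N] ** 2))
rms_last = np.sqrt(np.mean(e[-N:] ** 2))
print(rms_first, rms_last)           # the per-period error shrinks as RC learns
```

The gain phi must keep the per-period contraction factor below one at the disturbance harmonics, consistent with the phase-error condition stated in the abstract.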
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are used ever more widely in modern optical systems because of their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, requirements on large-aperture, high-precision aspheric surfaces are becoming more pressing. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that researchers have taken very seriously. Addressing the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended material removal function model. For control across the full frequency band, low-frequency errors can be corrected with the optimized material removal function, while medium-high-frequency errors are controlled by a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror reached rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can effectively guide large aspheric surface manufacturing.
Characterization of Errors Inherent in System EMP Vulnerability Assessment Programs,
1980-10-01
Patriot system. * B-1 aircraft. * E-3A airborne warning and control system aircraft. * PRC-77 radio. * Lance missile system. * Safeguard ABM system...carefully or the offset will create large frequency-domain error. Frequency-tying, too, can improve f-domain data. Of the various recording systems studied
Weinman, J A
1988-10-01
A simulated analysis is presented that shows that returns from a single-frequency space-borne lidar can be combined with data from conventional visible satellite imagery to yield profiles of aerosol extinction coefficients and the wind speed at the ocean surface. The optical thickness of the aerosols in the atmosphere can be derived from visible imagery. That measurement of the total optical thickness can constrain the solution to the lidar equation to yield a robust estimate of the extinction profile. The specular reflection of the lidar beam from the ocean can be used to determine the wind speed at the sea surface once the transmission of the atmosphere is known. The impact on the retrieved aerosol profiles and surface wind speed produced by errors in the input parameters and noise in the lidar measurements is also considered.
Note: Design and capability verification of fillet triangle flexible support
NASA Astrophysics Data System (ADS)
Wang, Tao; San, Xiao-Gang; Gao, Shi-Jie; Wang, Jing; Ni, Ying-Xue; Sang, Zhi-Xin
2017-12-01
By increasing the section thickness of a triangular flexible hinge, this study focuses on the optimal selection of parameters of fillet triangle flexible hinges and flexible supports. Based on Castigliano's second theorem, the flexibility expression of the fillet triangle flexible hinge was derived. A case design was then performed, and three other types of flexible hinge were compared with this type. Finite element models of fillet triangle flexible hinges and the flexible support were built, and the performance parameters were calculated in simulation. Finally, an experimental platform was established to validate the analysis results. The maximum error is less than 8%, which verifies the accuracy of the simulation process and the derived equations; the fundamental frequency also meets the requirements of the system. The fillet triangle flexible hinge is shown to offer the advantages of high precision and low flexibility.
Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael
2014-04-01
We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
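For a 1-df test, the inflation can be computed directly: if the nominal chi-square statistic actually follows a scaled non-central $\chi^2$, the true rejection rate exceeds the nominal level. The scale and non-centrality values below are hypothetical placeholders (the paper's explicit formulae for them are not reproduced here):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def actual_type1_error(alpha, scale, ncp):
    """True rejection probability of a nominal level-alpha 1-df chi-square
    test when the statistic follows scale * noncentral-chi2(1, ncp)."""
    crit = nd.inv_cdf(1 - alpha / 2) ** 2   # chi2(1) critical value
    x = math.sqrt(crit / scale)             # threshold on |Z + sqrt(ncp)|
    delta = math.sqrt(ncp)
    return (1 - nd.cdf(x - delta)) + nd.cdf(-x - delta)

print(actual_type1_error(0.05, 1.0, 0.0))   # ≈ 0.05: no error, nominal level holds
print(actual_type1_error(0.05, 1.1, 0.5))   # ≈ 0.13: inflated false-positive rate
```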
He, Wangli; Qian, Feng; Han, Qing-Long; Cao, Jinde
2012-10-01
This paper investigates the problem of master-slave synchronization of two delayed Lur'e systems in the presence of parameter mismatches. First, by analyzing the corresponding synchronization error system, synchronization with an error level, which is referred to as quasi-synchronization, is established. Some delay-dependent quasi-synchronization criteria are derived. An estimation of the synchronization error bound is given, and an explicit expression of error levels is obtained. Second, sufficient conditions on the existence of feedback controllers under a predetermined error level are provided. The controller gains are obtained by solving a set of linear matrix inequalities. Finally, a delayed Chua's circuit is chosen to illustrate the effectiveness of the derived results.
NASA Astrophysics Data System (ADS)
Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar
2012-12-01
A mobile ad hoc network (MANET) is an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent support for high data rate transmission, which corresponds to its high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as a primary feature. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades the signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article, which removes the effect of channel frequency offsets from the received OFDM symbols. The second problem addressed in this article is the noise induced into the received symbol by different sources, which increases its bit error rate and makes it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, a maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in every iteration.
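The ICI mechanism can be reproduced in a few lines: a carrier frequency offset that is a fraction of the subcarrier spacing destroys subcarrier orthogonality at the receiver FFT. The subcarrier count, offset, and QPSK mapping below are illustrative, not the article's system parameters:

```python
import numpy as np

N, eps = 64, 0.1                     # subcarriers; CFO as fraction of spacing
rng = np.random.default_rng(2)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), N)  # QPSK

x = np.fft.ifft(X) * np.sqrt(N)      # OFDM modulator (unitary scaling)
n = np.arange(N)
y = x * np.exp(2j * np.pi * eps * n / N)   # channel applies the frequency offset
Y = np.fft.fft(y) / np.sqrt(N)       # receiver FFT

# With eps != 0 each output bin holds an attenuated, rotated copy of its own
# symbol plus crosstalk (ICI) leaked from every other subcarrier.
evm = np.sqrt(np.mean(np.abs(Y - X) ** 2))
print(evm)                           # nonzero error vector magnitude
```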
Hofzumahaus, A; Kraus, A; Müller, M
1999-07-20
A spectroradiometer has been developed for direct measurement of the solar actinic UV flux (scalar intensity) and determination of photolysis frequencies in the atmosphere. The instrument is based on a scanning double monochromator with an entrance optic that exhibits an isotropic angular response over a solid angle of 2π sr. Actinic flux spectra are measured at a resolution of 1 nm across a range of 280-420 nm, which is relevant for most tropospheric photolysis processes. The photolysis frequencies are derived from the measured radiation spectra by use of published absorption cross sections and quantum yields. The advantage of this technique compared with traditional chemical actinometry is its versatility: it is possible to determine the photolysis frequency for any photochemical reaction of interest, provided that the respective molecular photodissociation parameters are known and the absorption cross section falls within a wavelength range accessible to the spectroradiometer. The instrument and the calibration procedures are described in detail, and problems specific to measurement of the actinic radiation are discussed. An error analysis is presented together with a discussion of the spectral requirements of the instrument for accurate measurements of important tropospheric photolysis frequencies (J(O(1D)), J(NO2), J(HCHO)). Examples of measurements from previous atmospheric chemistry field campaigns are presented and discussed.
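Deriving a photolysis frequency from a measured actinic flux spectrum amounts to the weighted spectral integral J = ∫ F(λ) σ(λ) φ(λ) dλ over the instrument's 280-420 nm range. The spectra below are made-up shapes purely to show the bookkeeping; a real calculation uses the measured flux and published cross sections and quantum yields:

```python
import numpy as np

wl = np.arange(280.0, 421.0, 1.0)   # wavelength grid, nm (1 nm resolution)
dwl = 1.0

# illustrative (not real) spectra:
F = 1e14 * np.exp(-((wl - 380.0) / 60.0) ** 2)       # actinic flux, photons cm^-2 s^-1 nm^-1
sigma = 1e-19 * np.exp(-((wl - 310.0) / 20.0) ** 2)  # absorption cross section, cm^2
phi = np.clip((340.0 - wl) / 40.0, 0.0, 1.0)         # quantum yield, dimensionless

J = np.sum(F * sigma * phi) * dwl    # photolysis frequency, s^-1 (rectangle rule)
print(J)
```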
Phase-locking and coherent power combining of broadband linearly chirped optical waves.
Satyan, Naresh; Vasilyev, Arseny; Rakuljic, George; White, Jeffrey O; Yariv, Amnon
2012-11-05
We propose, analyze and demonstrate the optoelectronic phase-locking of optical waves whose frequencies are chirped continuously and rapidly with time. The optical waves are derived from a common optoelectronic swept-frequency laser based on a semiconductor laser in a negative feedback loop, with a precisely linear frequency chirp of 400 GHz in 2 ms. In contrast to monochromatic waves, a differential delay between two linearly chirped optical waves results in a mutual frequency difference, and an acousto-optic frequency shifter is therefore used to phase-lock the two waves. We demonstrate and characterize homodyne and heterodyne optical phase-locked loops with rapidly chirped waves, and show the ability to precisely control the phase of the chirped optical waveform using a digital electronic oscillator. A loop bandwidth of ~60 kHz and a residual phase error variance of <0.01 rad^2 between the chirped waves are obtained. Further, we demonstrate the simultaneous phase-locking of two optical paths to a common master waveform, and the ability to electronically control the resultant two-element optical phased array. The results of this work enable coherent power combining of high-power fiber amplifiers, where a rapidly chirping seed laser reduces stimulated Brillouin scattering, and electronic beam steering of chirped optical waves.
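The delay-to-frequency mapping that motivates the acousto-optic shifter is simple to state: for a chirp rate xi (here 400 GHz / 2 ms = 2e14 Hz/s), a differential delay tau produces a beat at xi*tau. A minimal sketch:

```python
def chirp_beat_frequency(chirp_rate_hz_per_s, delay_s):
    """Mutual frequency difference between two copies of a linearly
    chirped wave separated by a differential delay: f_beat = xi * tau."""
    return chirp_rate_hz_per_s * delay_s
```

For example, a 1 ns path mismatch on the 2e14 Hz/s chirp yields a 200 kHz beat, which is the offset the frequency shifter must absorb before phase-locking.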
Magnitude and frequency of floods in the United States. Part 13. Snake River basin
Thomas, C.A.; Broom, H.C.; Cummans, J.E.
1963-01-01
The magnitude of a flood of any selected frequency up to 50 years for any site on any stream in the Snake River basin can be determined by methods outlined in this report, with some limitations. The methods are not applicable for regulated streams, for drainage basins smaller than 10 or larger than 5,000 square miles, for streams fed by large springs, or for streams that have flow characteristics materially different from the regional pattern. The magnitude of a flood for a selected frequency at a given site is determined by using the appropriate composite frequency curve and the mean annual flood for the given site. The mean annual flood is computed from either a formula or a nomograph in which drainage area, mean annual precipitation, and a geographic factor are used as independent variables. The standard error of estimate for the computation of mean annual floods is plus 17 percent and minus 15 percent. Nine flood-frequency regions (A-I) are defined. In all except regions B and I, frequency relations vary with the mean altitude of the basin as well as with the geographic location; therefore, families of curves are required for 7 of the 9 flood-frequency regions. The report includes a brief description of the physiography and climate of the Snake River basin to explain the reason for the large variation in mean annual floods, which range from zero to about 27 cubic feet per second per square mile. Composite frequency curves and formulas for computing mean annual floods are based on all suitable flood data collected in the Snake River basin. Tables show the data used to derive the formula. Following the analysis of data are station descriptions and lists of peak stages and discharges for 295 gaging stations at which 5 or more years of annual flood records were collected prior to Sept. 30, 1957. Many flood peak data are not usable in defining the frequency curves and deriving the formula because of large diversions and regulation upstream from the gaging stations.
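The two-step recipe above, a composite frequency curve applied to the mean annual flood, can be sketched as follows. The curve values in the example are hypothetical placeholders, since the report's regional curves and formula are not reproduced in this abstract:

```python
import numpy as np

def flood_magnitude(t_years, mean_annual_flood, freq_curve):
    """T-year flood = (composite-frequency-curve ratio) x (mean annual
    flood). freq_curve maps recurrence interval (years) to the ratio of
    the T-year flood to the mean annual flood for the region."""
    t = sorted(freq_curve)
    ratios = [freq_curve[k] for k in t]
    # Interpolate on the logarithm of the recurrence interval,
    # the usual plotting scale for flood-frequency curves.
    ratio = np.interp(np.log(t_years), np.log(t), ratios)
    return float(ratio * mean_annual_flood)
```

With an illustrative curve {2.33: 1.0, 10: 1.6, 50: 2.4} and a mean annual flood of 1000 cfs, the 10-year flood evaluates to 1600 cfs.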
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
Analyzing Effect of System Inertia on Grid Frequency Forecasting Using Two Stage Neuro-Fuzzy System
NASA Astrophysics Data System (ADS)
Chourey, Divyansh R.; Gupta, Himanshu; Kumar, Amit; Kumar, Jitesh; Kumar, Anand; Mishra, Anup
2018-04-01
Frequency forecasting is an important aspect of power system operation. The system frequency varies with the load-generation imbalance, and this variation depends on several parameters, including system inertia. System inertia determines the rate of fall of frequency after a disturbance in the grid. However, system inertia is usually not considered when forecasting power system frequency during planning and operation, which leads to significant forecasting errors. In this paper, the effect of inertia on frequency forecasting is analysed for a particular grid system, and a parameter equivalent to system inertia is introduced. This parameter is used to forecast the frequency of a typical power grid at any instant of time. The system gives appreciable results with reduced error.
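The link between inertia and the rate of fall of frequency comes from the standard swing equation; the sketch below states that textbook relation, not the paper's neuro-fuzzy model:

```python
def initial_rocof(delta_p_pu, inertia_h_s, f0=50.0):
    """Initial rate of change of frequency (Hz/s) after a sudden
    load-generation imbalance delta_p_pu (per unit on system base),
    from the swing equation: df/dt = -dP * f0 / (2 * H),
    where H is the system inertia constant in seconds."""
    return -delta_p_pu * f0 / (2.0 * inertia_h_s)
```

Halving the inertia constant H doubles the initial rate of frequency fall for the same imbalance, which is why ignoring inertia biases a frequency forecast.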
Modeling methodology for MLS range navigation system errors using flight test data
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Flight test data were used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant-velocity, constant-glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of a low-frequency component and a random high-frequency component. The random high-frequency component was extracted from the MLS range measurements by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process, and maximum likelihood techniques were used to identify the parameters of the ARMA process.
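As a simplified stand-in for the ARMA maximum-likelihood identification described above, the sketch below fits a pure AR model to a residual series via the Yule-Walker equations (NumPy only; a full ARMA fit would require an iterative likelihood routine):

```python
import numpy as np

def yule_walker_ar(x, order=2):
    """Fit an AR(order) model x[t] = a1*x[t-1] + ... + e[t] to a residual
    series via the Yule-Walker equations (method-of-moments fit).
    Returns the AR coefficients and the innovation variance."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the Toeplitz system R a = [r1 ... r_order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - a @ r[1:]  # innovation (driving noise) variance
    return a, sigma2
```

Applied to a simulated AR(2) residual, the fit recovers the generating coefficients to within sampling error.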
NASA Astrophysics Data System (ADS)
Carnes, Michael R.; Mitchell, Jim L.; de Witt, P. Webb
1990-10-01
Synthetic temperature profiles are computed from altimeter-derived sea surface heights in the Gulf Stream region. The required relationships between surface height (dynamic height at the surface relative to 1000 dbar) and subsurface temperature are provided from regression relationships between dynamic height and amplitudes of empirical orthogonal functions (EOFs) of the vertical structure of temperature derived by de Witt (1987). Relationships were derived for each month of the year from historical temperature and salinity profiles from the region surrounding the Gulf Stream northeast of Cape Hatteras. Sea surface heights are derived using two different geoid estimates, the feature-modeled geoid and the air-dropped expendable bathythermograph (AXBT) geoid, both described by Carnes et al. (1990). The accuracy of the synthetic profiles is assessed by comparison to 21 AXBT profile sections which were taken during three surveys along 12 Geosat ERM ground tracks nearly contemporaneously with Geosat overflights. The primary error statistic considered is the root-mean-square (rms) difference between AXBT and synthetic isotherm depths. The two sources of error are the EOF relationship and the altimeter-derived surface heights. EOF-related and surface height-related errors in synthetic temperature isotherm depth are of comparable magnitude; each translates into about a 60-m rms isotherm depth error, or a combined 80 m to 90 m error for isotherms in the permanent thermocline. EOF-related errors are responsible for the absence of the near-surface warm core of the Gulf Stream and for the reduced volume of Eighteen Degree Water in the upper few hundred meters of (apparently older) cold-core rings in the synthetic profiles. 
The overall rms difference between surface heights derived from the altimeter and those computed from AXBT profiles is 0.15 dyn m when the feature-modeled geoid is used and 0.19 dyn m when the AXBT geoid is used; the portion attributable to altimeter-derived surface height errors alone is 0.03 dyn m less for each. In most cases, the deeper structure of the Gulf Stream and eddies is reproduced well by vertical sections of synthetic temperature, with largest errors typically in regions of high horizontal gradient such as across rings and the Gulf Stream front.
Sub-nanometer periodic nonlinearity error in absolute distance interferometers
NASA Astrophysics Data System (ADS)
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of beams, are thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.
A method of determining attitude from magnetometer data only
NASA Technical Reports Server (NTRS)
Natanson, G. A.; Mclaughlin, S. F.; Nicklas, R. C.
1990-01-01
Presented here is a new algorithm to determine attitude using only magnetometer data under the following conditions: (1) internal torques are known and (2) external torques are negligible. Torque-free rotation of a spacecraft in thruster firing acquisition phase and its magnetic despin in the B-dot mode give typical examples of such situations. A simple analytical formula has been derived in the limiting case of a spacecraft rotating with constant angular velocity. The formula has been tested using low-frequency telemetry data for the Earth Radiation Budget Satellite (ERBS) under normal conditions. Observed small oscillation of body-fixed components of the angular velocity vector near their mean values result in relatively minor errors of approximately 5 degrees. More significant errors come from processing digital magnetometer data. Higher resolution of digitized magnetometer measurements would significantly improve the accuracy of this deterministic scheme. Tests of the general version of the developed algorithm for a free-rotating spacecraft and for the B-dot mode are in progress.
Cryo-optical testing of large aspheric reflectors operating in the sub mm range
NASA Astrophysics Data System (ADS)
Roose, S.; Houbrechts, Y.; Mazzoli, A.; Ninane, N.; Stockman, Y.; Daddato, R.; Kirschner, V.; Venacio, L.; de Chambure, D.
2006-02-01
The cryo-optical testing of the PLANCK primary reflector (an elliptical off-axis CFRP reflector of 1550 mm x 1890 mm) is one of the major issues in the payload development program. It is required to measure the changes of the Surface Figure Error (SFE) with respect to the best ellipsoid, between 293 K and 50 K, with a 1 μm RMS accuracy. To achieve this, infrared interferometry has been used and a dedicated thermo-mechanical set-up has been constructed. This paper summarises the test activities, the test methods and the results on the PLANCK Primary Reflector - Flight Model (PRFM) achieved in FOCAL 6.5 at the Centre Spatial de Liege (CSL). Here, the Wave Front Error (WFE) will be considered; the SFE can be derived from the WFE measurement. After a brief introduction, the first part deals with the general test description. The thermo-elastic deformations will then be addressed: the surface deformation in the medium-frequency range (spatial wavelengths down to 60 mm) and core-cell dimpling.
Power Spectral Density Specification and Analysis of Large Optical Surfaces
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2009-01-01
The 2-dimensional Power Spectral Density (PSD) can be used to characterize the mid- and high-spatial-frequency components of the surface height errors of an optical surface. We found it necessary to have a complete, easy-to-use approach for specifying and evaluating the PSD characteristics of large optical surfaces, one that allows the surface quality of a large optical surface to be specified from simulated results using a PSD function, and the measured surface profile data of the same optic to be evaluated against the simulation predictions during the specification-derivation process. This paper provides a complete mathematical description of the PSD error analysis and proposes a new approach in which a 2-dimensional (2D) PSD is converted into a 1-dimensional (1D) one by azimuthally averaging the 2D PSD. The 1D PSD calculated this way has the same unit and the same profile as the original PSD function, which allows the two to be compared with each other directly.
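The azimuthal-averaging step that collapses a 2D PSD to 1D can be sketched as a radial binning in normalized frequency units (a generic implementation, not necessarily the author's exact one):

```python
import numpy as np

def azimuthal_average_psd(psd2d, n_bins=None):
    """Collapse a 2D PSD to 1D by averaging over annuli of constant
    radial spatial frequency. Returns (bin-center frequencies, 1D PSD)."""
    ny, nx = psd2d.shape
    fy = np.fft.fftfreq(ny)[:, None]       # normalized frequency grid
    fx = np.fft.fftfreq(nx)[None, :]
    fr = np.hypot(fx, fy)                  # radial spatial frequency
    if n_bins is None:
        n_bins = min(nx, ny) // 2
    edges = np.linspace(0.0, fr.max(), n_bins + 1)
    idx = np.clip(np.digitize(fr.ravel(), edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=psd2d.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, sums / np.maximum(counts, 1)
```

Because each annulus is a plain average, a flat (white) 2D PSD maps to a flat 1D PSD with the same value, consistent with the "same unit, same profile" property noted above.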
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is order n log n, nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order n^(4/3). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.
Carraro, Paolo; Zago, Tatiana; Plebani, Mario
2012-03-01
Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15 917 blood sample tubes were collected and 52 982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29 916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.
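The error frequencies above are quoted in parts per million of opportunities; the conversion is elementary. As a worked check with hypothetical counts, about three misidentifications among the 8547 requests lands near the reported ~352 ppm:

```python
def rate_ppm(n_events, n_opportunities):
    """Event frequency expressed in parts per million of opportunities."""
    return n_events / n_opportunities * 1_000_000
```

The same conversion recovers the hemolysed-sample figure: 2537 events per million samples is 2537 ppm by definition.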
Optimal waveforms design for ultra-wideband impulse radio sensors.
Li, Bin; Zhou, Zheng; Zou, Weixia; Li, Dejian; Zhao, Chong
2010-01-01
Ultra-wideband impulse radio (UWB-IR) sensors should comply entirely with the regulatory spectral limits for elegant coexistence. Under this premise, it is desirable for UWB pulses to improve frequency utilization to guarantee transmission reliability. Meanwhile, orthogonal waveform division multiple-access (WDMA) is significant for mitigating mutual interference in UWB sensor networks. Motivated by these considerations, we suggest in this paper a low-complexity pulse forming technique and investigate its efficient implementation on a DSP. The UWB pulse is derived preliminarily with the objective of minimizing the mean square error (MSE) between the designed power spectral density (PSD) and the emission mask. Subsequently, this pulse is iteratively modified until its PSD completely conforms to the spectral constraints. The orthogonality restriction is then analyzed and different algorithms are presented. Simulation demonstrates that our technique can produce UWB waveforms with frequency utilization far surpassing other existing signals under arbitrary spectral mask conditions. Compared to other orthogonality design schemes, the designed pulses maintain mutual orthogonality without any penalty on frequency utilization and hence are much superior in a WDMA network, especially with synchronization deviations.
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem, so we derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback-Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude, and correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Frequency spectrum analyzer with phase-lock
Boland, Thomas J.
1984-01-01
A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal comprises a voltage-controlled oscillator (VCO), which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO, locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator, which transforms the input signal to the frequency domain while providing an accurate absolute amplitude measurement of each frequency component of the input signal.
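The correlator stage can be illustrated with a quadrature correlation against the momentarily locked reference frequency, which yields the absolute amplitude of that component. This is a minimal signal-processing sketch, not the patented circuit:

```python
import numpy as np

def correlate_amplitude(x, f, fs):
    """Absolute amplitude of the component of x at frequency f, obtained
    by correlating x against quadrature references (a single-bin DFT)."""
    t = np.arange(len(x)) / fs
    i = 2.0 / len(x) * np.dot(x, np.cos(2 * np.pi * f * t))  # in-phase
    q = 2.0 / len(x) * np.dot(x, np.sin(2 * np.pi * f * t))  # quadrature
    return float(np.hypot(i, q))
```

Over an integer number of cycles the quadrature pair rejects the other frequency components exactly, so the returned value is the true component amplitude regardless of its phase.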
Time synchronization of a frequency-hopped MFSK communication system
NASA Technical Reports Server (NTRS)
Simon, M. K.; Polydoros, A.; Huth, G. K.
1981-01-01
In a frequency-hopped (FH) multiple-frequency-shift-keyed (MFSK) communication system, it is the frequency hopping, rather than the data sequence as in a conventional (non-frequency-hopped) system, that produces the frequency transitions needed for time synchronization estimation. Making use of this observation, this paper presents a fine synchronization (i.e., time errors of less than a hop duration) technique for estimation of FH timing. The performance degradation due to imperfect FH time synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of hops used in the FH timing estimate.
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted earlier from the design simulations. The GRACE error budget is dominated by noise from the sensors, dealiasing models, and modeling errors, and GRACE range-rate residuals contain these errors. Their analysis therefore provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance; range-rate residuals are computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for the determination of acoustic source characteristics, namely the source strength and the source impedance, in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, though previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed, and the dispersion of the source pressure, obtained by an inverse calculation, was used as an indicator of the accuracy of the results. It was found that, for a given load length, the load resistance peaks at the frequencies where the length equals odd multiples of a quarter wavelength, and the error in source impedance identification is maximal there. Therefore, the load impedance in the frequency ranges around these odd quarter-wavelength frequencies should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
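The frequencies to avoid, where the load length is an odd multiple of a quarter wavelength, follow directly from f_k = (2k-1)c/(4L). A small sketch, assuming c = 343 m/s for air at room temperature:

```python
def quarter_wave_frequencies(load_length_m, c=343.0, n_max=5):
    """Frequencies at which the load length equals odd multiples of a
    quarter wavelength: f_k = (2k - 1) * c / (4 * L), k = 1..n_max.
    These are the bands to exclude from source-impedance identification."""
    return [(2 * k - 1) * c / (4.0 * load_length_m)
            for k in range(1, n_max + 1)]
```

For a 0.5 m load pipe the first two problem frequencies are 171.5 Hz and 514.5 Hz.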
Evaluation of causes and frequency of medication errors during information technology downtime.
Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F
2009-06-15
The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine whether medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index, and the percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey, for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime response of 57% over a 12-month period. Lost interfaces and interface malfunctions were reported for centralized and decentralized ADSs, with average downtime responses of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) of the medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols for handling periods of system downtime. Efforts should be directed at reducing the frequency and length of downtime in order to minimize medication errors during such downtime.
NASA Astrophysics Data System (ADS)
Inoue, S.; Shiraishi, J.; Takechi, M.; Matsunaga, G.; Isayama, A.; Hayashi, N.; Ide, S.
2017-11-01
The active stabilization effect of a rotating control field against error field penetration is numerically studied. We have developed a resistive magnetohydrodynamic code, 'AEOLUS-IT', which can simulate plasma responses to rotating/static external magnetic fields. Adopting a non-uniform flux coordinate system, the AEOLUS-IT simulation can employ the high magnetic Reynolds number conditions relevant to present tokamaks. With AEOLUS-IT, we successfully clarified the stabilization mechanism of the control field against error field penetration. The physical processes of the plasma rotation drive via the control field are demonstrated by nonlinear simulation, which reveals that the rotation amplitude at a resonant surface is not a monotonic function of the control field frequency but has an extremum. Consequently, two 'bifurcated' frequency ranges of the control field are found for the stabilization of the error field penetration.
NASA Astrophysics Data System (ADS)
Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.
2017-11-01
Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as amplitude and/or frequency modulation. The Hilbert transform has conventionally been used to calculate the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable to demodulation. This condition, which basically requires the carrier frequency to be sufficiently higher than the frequencies of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequencies. However, the diversification of diagnostic approaches and applications presents cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, challenging the applicability of the Bedrosian theorem. This work presents an analytic study quantifying the error introduced by Hilbert transform-based demodulation when the Bedrosian identity is not satisfied, and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) obtained through Hilbert transform-based demodulation when the Bedrosian theorem is violated. The proposed mitigation strategy is found effective in combating the demodulation error arising in this scenario, thus extending the applicability of Hilbert transform-based demodulation.
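The Hilbert-transform demodulation under discussion can be reproduced with the standard FFT construction of the analytic signal. In the sketch below the carrier (200 Hz) is well above the modulation (5 Hz), so the Bedrosian condition holds and the recovered envelope is essentially exact; shrinking that ratio is what introduces the error analysed in the paper:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain construction: zero the
    negative frequencies and double the positive ones (the discrete
    equivalent of x + j*Hilbert(x))."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope(x):
    """Amplitude demodulation: magnitude of the analytic signal."""
    return np.abs(analytic_signal(x))
```

With x = (1 + 0.5 cos(2*pi*5 t)) cos(2*pi*200 t) sampled over an integer number of cycles, envelope(x) returns the modulating envelope to machine precision.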
A compact ADPLL based on symmetrical binary frequency searching with the same circuit
NASA Astrophysics Data System (ADS)
Li, Hangbiao; Zhang, Bo; Luo, Ping; Liao, Pengfei; Liu, Junjie; Li, Zhaoji
2015-03-01
A compact all-digital phase-locked loop (C-ADPLL) based on symmetrical binary frequency searching (BFS) with the same circuit is presented in this paper. A rule of minimising the relative frequency variation error Δη (MFE) is derived as design guidance and is used to weigh the accuracy of the digitally controlled oscillator (DCO) clock frequency. The symmetrical BFS is used in both the coarse-tuning and fine-tuning processes of the DCO clock frequency to achieve the minimum Δη of the locked DCO clock, which simplifies the circuit architecture and saves die area. The C-ADPLL is implemented in a 0.13 μm one-poly-eight-metal (1P8M) CMOS process and the on-chip area is only 0.043 mm2. The measurement results show that the peak-to-peak (Pk-Pk) jitter and the root-mean-square jitter of the DCO clock frequency are 270 ps at 72.3 MHz and 42 ps at 79.4 MHz, respectively, while the power consumption of the proposed ADPLL is only 2.7 mW (at 115.8 MHz) with a 1.2 V power supply. The measured Δη is not more than 1.14%. Compared with other ADPLLs, the proposed C-ADPLL has a simpler architecture, smaller size and lower Pk-Pk jitter.
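Binary frequency searching is, at its core, a successive-approximation loop over the DCO control word. The behavioural sketch below assumes frequency increases monotonically with the code and uses a hypothetical measure callback in place of real frequency-counting hardware:

```python
def binary_frequency_search(measure, f_target, bits=8):
    """Successive-approximation tuning of a DCO control word: try each
    bit from MSB to LSB and keep it while the measured frequency does
    not exceed the target. Returns the largest code whose frequency is
    at or below f_target (frequency assumed monotonic in the code)."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        if measure(trial) <= f_target:
            code = trial
    return code
```

The loop needs exactly `bits` frequency measurements, which is what makes the scheme attractive for a compact on-chip implementation.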
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from drawbacks such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is proposed in which the long ANC filters are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is reduced further by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination, yielding the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations of both proposed partitioned block ANC algorithms demonstrate their accuracy relative to the time-domain FXLMS algorithm.
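The time-domain FXLMS baseline the abstract compares against can be sketched as follows (a minimal simulation with invented paths, tone and step size; the secondary path is assumed perfectly estimated, and the frequency-domain partitioning itself is not reproduced here):

```python
import numpy as np

fs, n = 8000, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 200 * t)          # reference noise (illustrative tone)
P = np.array([1.0, 0.8, 0.2])            # primary path (assumed)
S = np.array([0.0, 0.6, 0.3])            # secondary path (assumed known)
d = np.convolve(x, P)[:n]                # noise reaching the error microphone
xf = np.convolve(x, S)[:n]               # filtered reference ("filtered-x")

L, mu = 8, 0.05
w = np.zeros(L)                          # adaptive ANC filter
xbuf, xfbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(S))
e = np.zeros(n)
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf   # anti-noise sample
    e[i] = d[i] - S @ ybuf                        # residual after secondary path
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[i]
    w += mu * e[i] * xfbuf                        # FXLMS weight update

rms_first = np.sqrt(np.mean(e[:500] ** 2))
rms_last = np.sqrt(np.mean(e[-500:] ** 2))
```

Each sample costs O(L) multiplies for the filter and O(L) for the update; it is exactly this per-sample cost, with L in the thousands at high sampling rates, that the partitioned frequency-domain variants are designed to reduce.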
Button, D. K.; Schut, Frits; Quang, Pham; Martin, Ravonna; Robertson, Betsy R.
1993-01-01
Dilution culture, a method for growing the typical small bacteria from natural aquatic assemblages, has been developed. Each of 11 experimental trials of the technique was successful. Populations are measured, diluted to a small and known number of cells, inoculated into unamended sterilized seawater, and examined three times for the presence of 10^4 or more cells per ml over a 9-week interval. Mean viability for assemblage members is obtained from the frequency of growth, and many of the cultures produced are pure. Statistical formulations for determining viability and the frequency of pure culture production are derived, along with formulations for the associated errors. Computer simulations of experiments agreed with computed values within the expected error, verifying the formulations. These led to strategies for optimizing viability determinations and pure culture production. Viabilities were usually between 2 and 60% and decreased with >5 mg of amino acids per liter as carbon. In view of the difficulties in growing marine oligobacteria, these high values are noteworthy. Significant differences in population characteristics during growth, observed by high-resolution flow cytometry, suggested substantial population diversity. Growth of total populations, as well as of cytometry-resolved subpopulations, was sometimes truncated at levels near 10^4 cells per ml, showing that viable cells could escape detection. Viability is therefore defined as the ability to grow to that population; true viabilities could be even higher. Doubling times, based on whole populations as well as individual subpopulations, were in the 1-day to 1-week range. Data were examined for changes in viability with dilution suggesting cell-cell interactions, but none could be confirmed. The frequency of pure culture production can be adjusted by inoculum size if the viability is known.
The apparently pure cultures produced retained the size and apparent DNA content characteristic of the bulk of the organisms in the parent seawater. Three cultures are now available, two of which have been carried for 3 years. The method is thus seen as a useful step toward improving our understanding of typical aquatic organisms. PMID:16348896
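The paper's exact statistical formulations are not reproduced here, but the underlying binomial reasoning can be sketched (a simplified model assuming cells grow independently; function names and numbers are illustrative):

```python
def viability(p_growth, n_cells):
    """Viability v from the fraction of inoculated tubes that grew, assuming
    each of the n cells grows independently: P(growth) = 1 - (1 - v)**n."""
    return 1.0 - (1.0 - p_growth) ** (1.0 / n_cells)

def pure_culture_fraction(v, n_cells):
    """Probability that a *growing* tube contains exactly one viable cell,
    i.e. yields a pure culture under this simplified model."""
    p_one = n_cells * v * (1 - v) ** (n_cells - 1)
    p_any = 1 - (1 - v) ** n_cells
    return p_one / p_any

# round-trip check: 30% viable cells, 5 cells inoculated per tube
p = 1 - (1 - 0.30) ** 5
v = viability(p, 5)
```

This also shows why the pure-culture frequency can be tuned by inoculum size: fewer cells per tube means a growing tube is more likely to have started from a single viable cell.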
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz
We present the results of the estimation of parameters with LISA for nearly monochromatic gravitational waves in the low and high frequency regimes for the time-delay interferometry response. Angular resolution of the detector and the estimation errors of the signal's parameters in the high frequency regimes are calculated as functions of the position in the sky and as functions of the frequency. For the long-wavelength domain we give compact formulas for the estimation errors valid on a wide range of the parameter space.
Errors in finite-difference computations on curvilinear coordinate systems
NASA Technical Reports Server (NTRS)
Mastin, C. W.; Thompson, J. F.
1980-01-01
Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.
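A minimal numpy experiment (grid and test function invented for illustration, not the paper's analysis) shows the effect being analyzed: the naive central difference is second-order accurate on a uniform grid but degrades to first order on an irregularly stretched grid, which is why derivative computations on curvilinear coordinate systems need care.

```python
import numpy as np

def central_diff(f, x):
    # Naive central difference on an arbitrary grid:
    # (f[i+1] - f[i-1]) / (x[i+1] - x[i-1])
    return (f[2:] - f[:-2]) / (x[2:] - x[:-2])

def max_err(n, jitter):
    h = 2 * np.pi / n
    i = np.arange(n + 1)
    x = i * h + jitter * h * (-1.0) ** i   # alternating stretch of the spacing
    d = central_diff(np.sin(x), x)
    return np.max(np.abs(d - np.cos(x[1:-1])))

# observed order of accuracy from two grid resolutions
order_uniform = np.log2(max_err(200, 0.0) / max_err(400, 0.0))
order_jitter = np.log2(max_err(200, 0.25) / max_err(400, 0.25))
```

On the stretched grid the leading truncation term is (h2 - h1)/2 · f'', which no longer vanishes, so halving the spacing only halves the error instead of quartering it.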
NASA Astrophysics Data System (ADS)
de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.
2008-08-01
This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-satellite links operating in the Extremely High Frequency band (EHF, 20-50 GHz). A common approach to this problem is to assume that the prediction error corresponds only to scintillations, whose variance is taken to be constant. This assumption is not realistic, however, because the error time series are heteroscedastic: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level but also the conditional distribution of the error are predicted. It allows an accurate upper bound on the future attenuation to be estimated in real time, which minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons, and the model is shown to significantly outperform other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. For the resulting uplink prediction error, the contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
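The key heteroscedasticity idea can be sketched with a minimal GARCH(1,1) recursion (numpy only; the parameters are invented and assumed known rather than estimated, unlike the paper's full switching ARIMA/GARCH): the conditional error variance is time-varying, and predicting it yields a time-varying upper bound of the stated coverage.

```python
import numpy as np

rng = np.random.default_rng(42)
omega, alpha, beta = 0.1, 0.1, 0.85      # assumed GARCH(1,1) parameters
n = 20000
z = rng.standard_normal(n)
sig2 = np.empty(n)
eps = np.empty(n)
sig2[0] = omega / (1 - alpha - beta)      # unconditional variance
eps[0] = np.sqrt(sig2[0]) * z[0]
for t in range(1, n):
    # conditional variance is known one step ahead from past data
    sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * z[t]

# time-varying 95% upper bound on |eps_t|, available at time t-1
bound = 1.96 * np.sqrt(sig2)
coverage = np.mean(np.abs(eps) <= bound)
```

A constant-variance bound sized for the average volatility would be violated in bursts during high-volatility (rainy) periods; conditioning the bound on the recursion keeps the coverage at its nominal level, which is exactly what an FMT margin needs.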
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
Partial null astigmatism-compensated interferometry for a concave freeform Zernike mirror
NASA Astrophysics Data System (ADS)
Dou, Yimeng; Yuan, Qun; Gao, Zhishan; Yin, Huimin; Chen, Lu; Yao, Yanxia; Cheng, Jinlong
2018-06-01
Partial null interferometry without using any null optics is proposed to measure a concave freeform Zernike mirror. Oblique incidence on the freeform mirror is used to compensate for astigmatism as the main component in its figure, and to constrain the divergence of the test beam as well. The phase demodulated from the partial nulled interferograms is divided into low-frequency phase and high-frequency phase by Zernike polynomial fitting. The low-frequency surface figure error of the freeform mirror represented by the coefficients of Zernike polynomials is reconstructed from the low-frequency phase, applying the reverse optimization reconstruction technology in the accurate model of the interferometric system. The high-frequency surface figure error of the freeform mirror is retrieved from the high-frequency phase adopting back propagating technology, according to the updated model in which the low-frequency surface figure error has been superimposed on the sag of the freeform mirror. Simulations verified that this method is capable of testing a wide variety of astigmatism-dominated freeform mirrors due to the high dynamic range. The experimental result using our proposed method for a concave freeform Zernike mirror is consistent with the null test result employing the computer-generated hologram.
Time synchronization of new-generation BDS satellites using inter-satellite link measurements
NASA Astrophysics Data System (ADS)
Pan, Junyang; Hu, Xiaogong; Zhou, Shanshi; Tang, Chengpan; Guo, Rui; Zhu, Lingfeng; Tang, Guifeng; Hu, Guangming
2018-01-01
Autonomous satellite navigation is based on the ability of a Global Navigation Satellite System (GNSS), such as Beidou, to estimate orbits and clock parameters onboard the satellites using Inter-Satellite Link (ISL) measurements instead of tracking data from a ground monitoring network. This paper focuses on the time synchronization of new-generation Beidou Navigation Satellite System (BDS) satellites equipped with an ISL payload. Two modes of Ka-band ISL measurement, Time Division Multiple Access (TDMA) mode and continuous link mode, were used onboard these BDS satellites. From a mathematical formulation of each measurement mode, satellite clock offsets and geometric ranges were derived from the dual one-way measurements. Pseudoranges and clock offsets were then evaluated for the new-generation BDS satellites. The evaluation shows that the ranging accuracies of the TDMA ISL and the continuous link are approximately 4 cm and 1 cm (root mean square, RMS), respectively. Both lead to ISL clock offset residuals of less than 0.3 ns (RMS). For further validation, time synchronization between these satellites and a ground control station maintaining the BDS system time (BDT) was conducted using L-band Two-Way Satellite Time and Frequency Transfer (TWSTFT). Systematic errors in the ISL measurements were calibrated by comparing the derived clock offsets with the TWSTFT. The standard deviations of the estimated ISL systematic errors are less than 0.3 ns, and the calibrated ISL clock parameters are consistent with those of the L-band TWSTFT. For the regional BDS network, the addition of ISL measurements for medium Earth orbit (MEO) BDS satellites increased the clock tracking coverage by more than 40% per orbital revolution. As a result, the clock prediction error for satellite M1S improved from 3.59 to 0.86 ns (RMS), and that for satellite M2S improved from 1.94 to 0.57 ns (RMS), a significant improvement by a factor of 3-4.
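The dual one-way principle can be sketched as follows (a simplified model that ignores light-time, satellite motion and hardware delays; all numbers are illustrative): summing the two one-way pseudoranges cancels the clock offsets and leaves the geometry, while differencing cancels the geometry and leaves the clock offset.

```python
C = 299792458.0  # speed of light, m/s

def dual_one_way(rho_ab, rho_ba):
    """Split dual one-way pseudoranges into geometry and clock offset.
    Model: rho_ab = R + c*(dt_b - dt_a), rho_ba = R + c*(dt_a - dt_b)."""
    geometric_range = 0.5 * (rho_ab + rho_ba)
    clock_offset = 0.5 * (rho_ab - rho_ba) / C   # dt_b - dt_a, in seconds
    return geometric_range, clock_offset

# illustrative numbers: 40,000 km link, 100 ns relative clock offset
R, dt = 4.0e7, 100e-9
rng_est, dt_est = dual_one_way(R + C * dt, R - C * dt)
```

This separation is what lets the paper evaluate ranging accuracy and clock offset residuals independently from the same ISL measurement pair.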
A new method of hybrid frequency hopping signals selection and blind parameter estimation
NASA Astrophysics Data System (ADS)
Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian
2018-04-01
Frequency hopping (FH) communication is widely used in military communications. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal sorting and blind parameter estimation is proposed. The method uses spectral transformation, spectral entropy calculation and the basic theory of PRI transformation to sort the components of the hybrid frequency hopping signal and estimate their parameters. The simulation results show that the method correctly classifies the frequency hopping component signals, with an estimation error of about 5% for the hop period and less than 1% for the hop frequency at an SNR of 10 dB. However, the performance of the method deteriorates seriously at low SNR.
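Spectral entropy, one of the ingredients named above, can be sketched in a few lines (numpy only; the normalization and the signals are illustrative assumptions): a single FH dwell concentrates its power in a few frequency bins and scores low, while a noise segment spreads power across the spectrum and scores high, which is what makes the measure useful for flagging hop dwells.

```python
import numpy as np

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum:
    near 0 for a tone-like segment, near 1 for a noise-like one."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    p = spec / spec.sum()
    nz = p[p > 0]
    return -(nz * np.log(nz)).sum() / np.log(len(p))

rng = np.random.default_rng(0)
n, fs = 1024, 1024.0
t = np.arange(n) / fs
h_tone = spectral_entropy(np.sin(2 * np.pi * 128 * t))   # one FH dwell (tone)
h_noise = spectral_entropy(rng.standard_normal(n))       # noise-only segment
```

Thresholding this score over sliding windows separates hop dwells from noise before the PRI-transform stage estimates the hop period.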
When ottoman is easier than chair: an inverse frequency effect in jargon aphasia.
Marshall, J; Pring, T; Chiat, S; Robson, J
2001-02-01
This paper presents evidence of an inverse frequency effect in jargon aphasia. The subject (JP) showed a pre-disposition for low frequency word production on a range of tasks, including picture naming, sentence completion and naming in categories. Her real word errors were also striking, in that these tended to be lower in frequency than the target. Reading data suggested that the inverse frequency effect was present only when production was semantically mediated. It was therefore hypothesised that the effect was at least partly due to the semantic characteristics of low frequency items. Some support for this was obtained from a comprehension task showing that JP's understanding of low frequency terms, which she often produced as errors, was superior to her understanding of high frequency terms. Possible explanations for these findings are considered.
Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation
NASA Astrophysics Data System (ADS)
Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu
2016-11-01
Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly processed and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design index, element tolerances and an error-correcting method are proposed; a ranging system is built and ranging experiments are performed. Experimental results show that with the proposed tolerances the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
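The basic ranging relation can be sketched as follows (an idealized model that ignores the additional phase retardation the paper identifies as the accuracy limit; all numbers are illustrative): for a round trip of length 2d, transmitted and received modulation are in phase at frequencies f_k = k·c/(2d), so the spacing between consecutive in-phase frequencies gives the distance directly.

```python
C = 299792458.0  # speed of light, m/s

def distance_from_inphase(freqs):
    """Distance from successive modulation frequencies at which transmitted
    and received signals are in phase: f_k = k*c/(2d) => d = c/(2*df)."""
    df = (freqs[-1] - freqs[0]) / (len(freqs) - 1)   # mean frequency spacing
    return C / (2.0 * df)

d_true = 150.0                                       # illustrative target, m
spacing = C / (2.0 * d_true)
measured = [k * spacing for k in range(1, 6)]        # ideal in-phase readings
d_est = distance_from_inphase(measured)
```

Because only frequency values are measured, no phase meter is needed; any uncorrected retardation from the wave plate or PBS shifts the in-phase frequencies and maps directly into a distance error, which is what the paper's tolerance analysis quantifies.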
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
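The augmentation step can be illustrated with a linearized toy problem (random matrices stand in for the real FEM Jacobians; all dimensions and values are invented): solving jointly for conductivity and electrode-movement parameters removes the bias that ignoring electrode position errors introduces into the conductivity estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_c, n_e = 30, 5, 3
Jc = rng.standard_normal((m, n_c))       # conductivity Jacobian (toy)
Je = rng.standard_normal((m, n_e))       # electrode-movement Jacobian (toy)
x_true = np.array([1.0, -0.5, 0.3, 0.8, -0.2])
p_true = np.array([0.10, -0.20, 0.15])   # electrode position perturbation
v = Jc @ x_true + Je @ p_true            # simulated boundary voltages

# naive: attribute all of the data to conductivity changes
x_naive, *_ = np.linalg.lstsq(Jc, v, rcond=None)
# augmented: solve for conductivity and electrode positions jointly
sol, *_ = np.linalg.lstsq(np.hstack([Jc, Je]), v, rcond=None)
err_naive = np.linalg.norm(x_naive - x_true)
err_aug = np.linalg.norm(sol[:n_c] - x_true)
```

In the real nonlinear problem this augmented Jacobian is used inside each Gauss-Newton iteration, but the mechanism is the same: the electrode columns absorb the modelling error instead of letting it leak into the conductivity image.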
ERIC Educational Resources Information Center
Kearsley, Greg P.
This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…
Performance Errors in Weight Training and Their Correction.
ERIC Educational Resources Information Center
Downing, John H.; Lander, Jeffrey E.
2002-01-01
Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…
Lee, Yueh-Chang; Wang, Jen-Hung; Chiu, Cheng-Jen
2017-12-08
Several studies have reported the efficacy of orthokeratology for myopia control; however, few have follow-up longer than 3 years. This study investigates whether overnight orthokeratology influences the progression rate of the manifest refractive error of myopic children over a longer follow-up period (up to 12 years) and, where changes in progression rate are found, examines the relationship between refractive changes and baseline factors, including refractive error, wearing age and lens replacement frequency. In addition, this study collects a long-term safety profile of overnight orthokeratology. This is a retrospective study of sixty-six school-age children who received overnight orthokeratology correction between January 1998 and December 2013. Thirty-six subjects whose baseline age and refractive error matched those in the orthokeratology group were selected to form the control group. These subjects were followed up for at least 12 months. Manifest refractions, cycloplegic refractions, uncorrected and best-corrected visual acuities, power vector of astigmatism, corneal curvature, and lens replacement frequency were obtained for analysis. Data on 203 eyes were derived from the 66 orthokeratology subjects (31 males and 35 females) and 36 control subjects (22 males and 14 females) enrolled in this study. Wearing ages ranged from 7 to 16 years (mean ± SE, 11.72 ± 0.18 years). The follow-up time ranged from 1 to 13 years (mean ± SE, 6.32 ± 0.15 years). At baseline, myopia ranged from -0.5 D to -8.0 D (mean ± SE, -3.70 ± 0.12 D), and astigmatism ranged from 0 D to -3.0 D (mean ± SE, -0.55 ± 0.05 D). Compared with the control group, the orthokeratology group had a significantly (p < 0.001) lower trend of refractive error change during the follow-up period.
According to the analysis results of GEE model, greater power of astigmatism was found to be associated with increased change of refractive error during follow-up years. Overnight orthokeratology was effective in slowing myopia progression over a twelve-year follow-up period and demonstrated a clinically acceptable safety profile. Initial higher astigmatism power was found to be associated with increased change of refractive error during follow-up years.
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
2017-09-17
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds on the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds: (i) the norm of the second (for M1) or fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value is zero, method M2 results in a better approximation.
Navigator alignment using radar scan
Doerry, Armin W.; Marquette, Brandeis
2016-04-05
The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.
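The geometry behind this can be sketched simply (an idealized model with invented numbers; the sign of the error, left or right, is ambiguous from the magnitude alone, as the abstract notes): an echo from a direction at angle θ from the velocity vector has Doppler frequency f_d = (2v/λ)·cos θ, so a measured maximum Doppler below the ideal 2v/λ reveals how far the boresight has strayed from the direction of travel.

```python
import math

def heading_error(f_dmax, v, wavelength):
    """Angle between velocity vector and boresight, from the maximum Doppler
    frequency of the echoes: f_d = (2*v/wavelength) * cos(theta)."""
    return math.acos(f_dmax * wavelength / (2.0 * v))

# illustrative platform: 100 m/s, 3 cm wavelength, boresight 2 deg off-track
v, lam = 100.0, 0.03
true_err = math.radians(2.0)
f_meas = (2 * v / lam) * math.cos(true_err)          # observed maximum Doppler
est = math.degrees(heading_error(f_meas, v, lam))
```

Resolving the left/right ambiguity requires additional information, e.g. observing how the Doppler bandwidth shifts as the heading estimate is perturbed.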
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation.
Influence of modulation frequency in rubidium cell frequency standards
NASA Technical Reports Server (NTRS)
Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.
1983-01-01
The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
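The error-signal idea for square-wave frequency modulation can be sketched for a Lorentzian resonance (linewidth and numbers are illustrative; the paper's full analysis also accounts for pumping rates, relaxation rates and RF saturation): the discriminator samples the line at the two modulation extremes, its output crosses zero at line center, and the slope of that crossing sets the sensitivity to a frequency offset.

```python
f0, fwhm = 6.834682e9, 500.0   # Rb transition frequency (Hz); width illustrative

def lorentz(f):
    """Lorentzian resonance, unit peak, full width fwhm at half maximum."""
    return 1.0 / (1.0 + ((f - f0) / (fwhm / 2)) ** 2)

def error_signal(f, dev):
    """Square-wave FM discriminator: difference of the resonance sampled
    at the two modulation extremes f +/- dev."""
    return lorentz(f + dev) - lorentz(f - dev)

dev = fwhm / 2                 # modulation amplitude of half the linewidth
df = 1.0
slope = (error_signal(f0 + df, dev) - error_signal(f0 - df, dev)) / (2 * df)
```

Sweeping the modulation amplitude `dev` in this toy model shows the trade-off the paper quantifies: too small a deviation gives a weak slope, too large a deviation samples the wings where the discriminant flattens.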
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
Single trial detection of hand poses in human ECoG using CSP based feature extraction.
Kapeller, C; Schneider, C; Kamada, K; Ogawa, H; Kunii, N; Ortner, R; Pruckl, R; Guger, C
2014-01-01
Decoding brain activity of corresponding high-level tasks may lead to an independent and intuitively controlled Brain-Computer Interface (BCI). Most of today's BCI research focuses on analyzing the electroencephalogram (EEG), which provides only limited spatial and temporal resolution. Electrocorticographic (ECoG) signals, in contrast, allow the investigation of spatially highly focused task-related activation within the high-gamma frequency band, making the discrimination of individual finger movements or complex grasping tasks possible. Common spatial patterns (CSP) are commonly used in BCI systems and provide a powerful tool for feature optimization and dimensionality reduction. This work focused on the discrimination of (i) three complex hand movements, and (ii) hand movement versus idle state. Two subjects S1 and S2 performed single 'open', 'peace' and 'fist' hand poses in multiple trials. Signals in the high-gamma frequency range between 100 and 500 Hz were spatially filtered based on a CSP algorithm for (i) and (ii). Additionally, a manual feature selection approach was tested for (i). A multi-class linear discriminant analysis (LDA) showed for (i) an error rate of 13.89% / 7.22% and 18.42% / 1.17% for S1 and S2 using manually / CSP selected features, while for (ii) a two-class LDA led to a classification error of 13.39% and 2.33% for S1 and S2, respectively.
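The two-class CSP step can be sketched as follows (numpy only, on toy multichannel data with invented dimensions; the whitening-plus-eigendecomposition route is one standard way to compute CSP, not necessarily the exact implementation used here): the learned spatial filters maximize the variance ratio between the two classes, which is the feature LDA then classifies.

```python
import numpy as np

def csp_filters(c1, c2):
    """Common spatial patterns via whitening of c1+c2 followed by an
    eigendecomposition of the whitened class-1 covariance."""
    evals, evecs = np.linalg.eigh(c1 + c2)
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, b = np.linalg.eigh(white @ c1 @ white.T)
    order = np.argsort(d)[::-1]          # rows sorted by class-1 variance
    return b[:, order].T @ white

rng = np.random.default_rng(0)
n_tr, n_ch, n_s = 30, 4, 200

def trials(scale):
    # toy band-power data: per-channel variance set by `scale`
    x = rng.standard_normal((n_tr, n_ch, n_s))
    return x * np.asarray(scale, float)[None, :, None]

x1 = trials([3, 1, 1, 1])                # class 1: strong channel 0
x2 = trials([1, 3, 1, 1])                # class 2: strong channel 1
c1 = np.mean([t @ t.T / n_s for t in x1], axis=0)
c2 = np.mean([t @ t.T / n_s for t in x2], axis=0)
W = csp_filters(c1, c2)
# variance of the first CSP component: high for class 1, low for class 2
v1 = np.mean([np.var(W[0] @ t) for t in x1])
v2 = np.mean([np.var(W[0] @ t) for t in x2])
```

Log-variances of the first and last few CSP components per trial form the low-dimensional feature vector that feeds the LDA classifier.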
Austin, Jonathan P; Sundararajan, Mahesh; Vincent, Mark A; Hillier, Ian H
2009-08-14
The geometric and electronic structures of the aqua, chloro, acetato, hydroxo and carbonato complexes of U, Np and Pu in both their (VI) and (V) oxidation states, and in an aqueous environment, have been studied using density functional theory methods. We have obtained micro-solvated structures derived from molecular dynamics simulations and included the bulk solvent using a continuum model. We find that two different hydrogen bonding patterns involving the axial actinyl oxygen atoms are sometimes possible, and may give rise to different An-O bond lengths and vibrational frequencies. These alternative structures are reflected in the experimental An-O bond lengths of the aqua and carbonato complexes. The variation of the redox potential of the uranyl complexes with the different ligands has been studied using both BP86 and B3LYP functionals. The relative values for the four uranium complexes having anionic ligands are in surprisingly good agreement with experiment, although the absolute values are in error by approximately 1 eV. The absolute error for the aqua species is much less, leading to an incorrect order of the redox potentials of the aqua and chloro species.
Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S.; Liu, Zhengwen; Lv, Yi; Shi, Bingyin
2014-01-01
This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and the percentage in examination errors between male and female students and among the different body parts examined were significantly different (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students. PMID:25329685
Scattering effect of submarine hull on propeller non-cavitation noise
NASA Astrophysics Data System (ADS)
Wei, Yingsan; Shen, Yang; Jin, Shuanbao; Hu, Pengfei; Lan, Rensheng; Zhuang, Shuangjiang; Liu, Dezhi
2016-05-01
This paper investigates the non-cavitation noise caused by a propeller running in the wake of a submarine, taking into account the scattering effect of the submarine's hull. Computational fluid dynamics (CFD) and the acoustic analogy method are adopted to predict the fluctuating pressure on the propeller's blades and its underwater noise radiation in the time domain, respectively. An effective iteration method, derived in the time domain from the Helmholtz integral equation, is used to solve multi-frequency wave scattering due to obstacles. Moreover, to minimize numerical errors caused by time interpolation, the pressure and its derivative at the sound emission time are obtained by summation of Fourier series. A time-averaging algorithm is used to achieve a convergent result if the solution oscillates in the iteration process. The developed iteration method is verified and applied to predict propeller noise scattered from the submarine's hull. The analysis results show that (1) the scattering effect of the hull on the pressure distribution pattern, especially at frequencies higher than the blade passing frequency (BPF), is evident in the contour maps of the sound pressure distribution over the submarine's hull and typical detecting planes; (2) the scattering effect of the hull on the total pressure is observable in the noise frequency spectrum at field points, where the maximum increment is up to 3 dB at BPF, 12.5 dB at 2BPF, and 20.2 dB at 3BPF; and (3) the pressure scattered from the hull is negligible in the near field of the propeller, since the scattering effect around the analyzed propeller location on the submarine's stern differs significantly from that of a surface ship. This work shows the importance of the submarine's scattering effect in evaluating propeller non-cavitation noise.
Steerable Principal Components for Space-Frequency Localized Images*
Landa, Boris; Shkolnisky, Yoel
2017-01-01
As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWFs expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods, and, more importantly, provides us with rigorous error bounds on the entire procedure. PMID:29081879
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched for iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
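The second stage described above, a Kalman filter that smooths the discriminator's phase outputs, can be sketched minimally. This is not the paper's loop: the two-state (phase, frequency) model, the noise covariances, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def kf_carrier_track(phase_meas, dt=0.001, q=1e-4, r=1e-2):
    """Two-state (phase, frequency) Kalman filter smoothing noisy
    discriminator phase outputs. Noise parameters q, r are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-frequency dynamics
    H = np.array([[1.0, 0.0]])              # only phase is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    out = []
    for z in phase_meas:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new phase measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

Because the state vector carries a frequency term, the filter tracks a constant-Doppler phase ramp without steady-state lag while averaging out the measurement noise.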
Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops
NASA Astrophysics Data System (ADS)
Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.
2017-08-01
The single-event upset (SEU) vulnerability of common first- and second-order all-digital phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest-order pole of the loop filter and in fraction-based phase detectors (PDs) may result in the worst-case error response, i.e., limit cycle errors, often requiring a system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs result only in temporary frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method for calculating the error correction capability of a tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for the method is established theoretically. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results quantitatively indicate the error correction capability of the TIF for different spatial-frequency errors under given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and achieves the same accuracy with less computation time.
NASA Astrophysics Data System (ADS)
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements enables many applications not possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback
Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching
2017-01-01
Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
Image motion compensation by area correlation and centroid tracking of solar surface features
NASA Technical Reports Server (NTRS)
Nein, M. E.; Mcintosh, W. R.; Cumings, N. P.
1983-01-01
An experimental solar correlation tracker was tested and evaluated on a ground-based solar magnetograph. Using sunspots as fixed targets, tracking error signals were derived by which the telescope image was stabilized against wind-induced perturbations. Two methods of stabilization were investigated: mechanical stabilization of the image by controlled two-axis motion of an active optical element in the telescope beam, and electronic stabilization by biasing of the electron scan in the recording camera. Both approaches have demonstrated telescope stability of about 0.6 arc sec under random perturbations which can cause the unstabilized image to move up to 120 arc sec at frequencies up to 30 Hz.
Chronopoulos, D
2017-01-01
A systematic expression quantifying the wave energy skewing phenomenon as a function of the mechanical characteristics of a non-isotropic structure is derived in this study. A structure of arbitrary anisotropy, layering and geometric complexity is modelled through Finite Elements (FEs) coupled to a periodic structure wave scheme. A generic approach for efficiently computing the angular sensitivity of the wave slowness for each wave type, direction and frequency is presented. The approach does not involve any finite differentiation scheme and is therefore computationally efficient and not prone to the associated numerical errors. Copyright © 2016 Elsevier B.V. All rights reserved.
Kramers-Kronig based quality factor for shear wave propagation in soft tissue
Urban, M W; Greenleaf, J F
2009-01-01
Shear wave propagation techniques have been introduced for measuring the viscoelastic material properties of tissue, but assessing the accuracy of these measurements in vivo is difficult. We propose using the Kramers-Kronig relationships to assess the consistency and quality of the measurements of shear wave attenuation and phase velocity. In ex vivo skeletal muscle we measured the wave attenuation at different frequencies, and then applied finite-bandwidth Kramers-Kronig equations to predict the phase velocities. We compared these predictions with the measured phase velocities and assessed the mean square error (MSE) as a quality factor. An algorithm was derived for computing a quality factor using the Kramers-Kronig relationships. PMID:19759409
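The quality-factor idea above can be sketched in a few lines: predict phase velocity from measured attenuation with a finite-bandwidth Kramers-Kronig relation, then score agreement with the measured velocities by mean square error. The particular dispersion relation used here (an O'Donnell-type form) and the units are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def _trapz(y, x):
    # plain trapezoidal rule (avoids NumPy version differences)
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

def kk_phase_velocity(freqs, alpha, c0, i0=0):
    """Predict phase velocity c(w) from attenuation alpha(w) using a
    finite-bandwidth Kramers-Kronig relation of the form
        1/c(w) = 1/c(w0) - (2/pi) * Int_{w0}^{w} alpha(w')/w'**2 dw'
    freqs in Hz, alpha in Np/m, c0 = measured velocity at freqs[i0]."""
    w = 2.0 * np.pi * np.asarray(freqs, dtype=float)
    integrand = np.asarray(alpha, dtype=float) / w**2
    c = np.empty_like(w)
    for i in range(w.size):
        lo, hi = min(i0, i), max(i0, i)
        seg = _trapz(integrand[lo:hi + 1], w[lo:hi + 1])
        signed = seg if i >= i0 else -seg   # integral runs from w0 to w
        c[i] = 1.0 / (1.0 / c0 - (2.0 / np.pi) * signed)
    return c

def kk_quality_factor(c_measured, c_predicted):
    """Mean square error between measured and KK-predicted velocities."""
    d = np.asarray(c_measured) - np.asarray(c_predicted)
    return float(np.mean(d**2))
```

A small quality factor indicates that the attenuation and velocity measurements are mutually consistent with causality; a lossless medium (zero attenuation) predicts a flat velocity and a quality factor of zero against itself.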
NASA Astrophysics Data System (ADS)
Jiang, Junfeng; An, Jianchang; Liu, Kun; Ma, Chunyu; Li, Zhichen; Liu, Tiegen
2017-09-01
We propose a fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor. Using an approximate derivation method and envelope detection, we successfully eliminate the asymmetry of the interference outputs and improve the processing speed. A positioning measurement experiment was carried out to verify the effectiveness of the proposed algorithm. At a sensing length of 85 km, the experimental results show that the mean positioning error is 18.9 m and the mean processing time is 116 ms. The processing speed is improved by a factor of 5 compared to the traditional time-frequency analysis-based positioning method.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
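The core propagation effect described above, in which modest DFT energy errors blow up to orders of magnitude in the TOF, can be illustrated with plain Monte Carlo sampling on a toy one-step Arrhenius model. The paper's adaptive sparse grids, maximum entropy error model, and full microkinetic model are beyond a short sketch; the barrier, prefactor, temperature, and ±0.2 eV error bound below are illustrative assumptions.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def tof(ea, prefactor=1e13, temp=300.0):
    """Toy Arrhenius turnover frequency for a single rate-limiting step."""
    return prefactor * np.exp(-ea / (KB * temp))

rng = np.random.default_rng(0)
# assume a nominal 1.0 eV barrier with a uniform +-0.2 eV DFT error bound
ea_samples = 1.0 + rng.uniform(-0.2, 0.2, 100_000)
tofs = tof(ea_samples)
lo, hi = np.percentile(tofs, [2.5, 97.5])
# the +-0.2 eV barrier uncertainty alone spans several orders of magnitude in TOF
print(f"95% TOF interval spans {np.log10(hi / lo):.1f} orders of magnitude")
```

Because the barrier enters exponentially, a ±0.2 eV uncertainty at room temperature (kT ≈ 0.026 eV) translates into a TOF uncertainty of several orders of magnitude, which is the regime the global sensitivity analysis has to handle.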
Kobayashi, Jyumpei; Wada, Keisuke; Furukawa, Megumi; Doi, Katsumi
2014-01-01
Thermostability is an important property of enzymes utilized for practical applications because it allows long-term storage and use as catalysts. In this study, we constructed an error-prone strain of the thermophile Geobacillus kaustophilus HTA426 and investigated thermoadaptation-directed enzyme evolution using the strain. A mutation frequency assay using the antibiotics rifampin and streptomycin revealed that G. kaustophilus had substantially higher mutability than Escherichia coli and Bacillus subtilis. The predominant mutations in G. kaustophilus were A · T→G · C and C · G→T · A transitions, implying that the high mutability of G. kaustophilus was attributable in part to high-temperature-associated DNA damage during growth. Among the genes that may be involved in DNA repair in G. kaustophilus, deletions of the mutSL, mutY, ung, and mfd genes markedly enhanced mutability. These genes were subsequently deleted to construct an error-prone thermophile that showed much higher (700- to 9,000-fold) mutability than the parent strain. The error-prone strain was auxotrophic for uracil owing to the fact that the strain was deficient in the intrinsic pyrF gene. Although the strain harboring Bacillus subtilis pyrF was also essentially auxotrophic, cells became prototrophic after 2 days of culture under uracil starvation, generating B. subtilis PyrF variants with an enhanced half-denaturation temperature of >10°C. These data suggest that this error-prone strain is a promising host for thermoadaptation-directed evolution to generate thermostable variants from thermolabile enzymes. PMID:25326311
Assessing land leveling needs and performance with unmanned aerial system
NASA Astrophysics Data System (ADS)
Enciso, Juan; Jung, Jinha; Chang, Anjin; Chavez, Jose Carlos; Yeom, Junho; Landivar, Juan; Cavazos, Gabriel
2018-01-01
Land leveling is the initial step for increasing irrigation efficiencies in surface irrigation systems. The objective of this paper was to evaluate the potential utilization of an unmanned aerial system (UAS) equipped with a digital camera to map ground elevations of a grower's field and compare them with field measurements. A secondary objective was to use UAS data to obtain a digital terrain model before and after land leveling. UAS data were used to generate orthomosaic images and three-dimensional (3-D) point cloud data by applying the structure from motion algorithm to the images. Ground control points (GCPs) were established around the study area, and they were surveyed using a survey-grade dual-frequency GPS unit for accurate georeferencing of the geospatial data products. A digital surface model (DSM) was then generated from the 3-D point cloud data before and after laser leveling to determine the change in topography. The UAS-derived DSM was compared with terrain elevation measurements acquired from land surveying equipment for validation. Although a 0.3% error, or a root mean square error of 0.11 m, was observed between the UAS-derived and ground-measured elevation data, the results indicated that UAS could be an efficient method for determining terrain elevation with acceptable accuracy when there are no plants on the ground, and it can be used to assess the performance of a land leveling project.
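The validation metric used above, root mean square error between UAS-derived and surveyed elevations, is straightforward to reproduce. The sketch below uses made-up check-point values purely for illustration:

```python
import numpy as np

def rmse(dsm_elev, survey_elev):
    """Root mean square error between UAS DSM and surveyed elevations (m)."""
    d = np.asarray(dsm_elev, dtype=float) - np.asarray(survey_elev, dtype=float)
    return float(np.sqrt(np.mean(d**2)))

# hypothetical check points (metres above datum)
dsm    = [12.10, 12.35, 11.98, 12.20]
survey = [12.00, 12.30, 12.05, 12.31]
print(f"RMSE = {rmse(dsm, survey):.3f} m")   # prints RMSE = 0.086 m
```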
NASA Astrophysics Data System (ADS)
Baldini, Luca; Adirosi, Elisa; Roberto, Nicoletta; Vulpiani, Gianfranco; Russo, Fabio; Napolitano, Francesco
2015-04-01
Radar precipitation retrieval uses several relationships that parameterize precipitation properties (such as rainfall rate, liquid water content, and, for radars at attenuated frequencies such as C- and X-band, attenuation) as a function of combinations of radar measurements. The uncertainty in such relations strongly affects the uncertainty of precipitation and attenuation estimates. A commonly used method to derive such relationships is to apply regression methods to precipitation measurements and radar observables simulated from datasets of drop size distributions (DSD) using microphysical and electromagnetic assumptions. DSD datasets are determined either from theoretical considerations (i.e., the assumption that the radar always samples raindrops whose sizes follow a gamma distribution) or from experimental measurements collected over the years by disdrometers. In principle, long-term disdrometer measurements provide parameterizations more representative of a specific climatology. However, instrumental errors specific to a disdrometer can affect the results. In this study, different weather radar algorithms resulting from DSDs collected by diverse types of disdrometers, namely a 2D video disdrometer, the first and second generations of the OTT Parsivel laser disdrometer, and the Thies Clima laser disdrometer, in the area of Rome (Italy), are presented and discussed to establish to what extent dual-polarization radar algorithms derived from experimental DSD datasets are influenced by the different error structures of the different types of disdrometers used to collect the data.
NASA Astrophysics Data System (ADS)
Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
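The MMSE-FDE step described above reduces, per subcarrier, to a one-tap weight W(k) = H*(k) / (|H(k)|² + σ²). The sketch below shows only that step (not the paper's full MC DS-CDMA chain or channel estimator); the block length, channel taps, and symbols are assumed values.

```python
import numpy as np

def mmse_fde(rx_freq, chan_est, noise_var):
    """One-tap MMSE frequency-domain equalizer applied per subcarrier:
    W(k) = conj(H(k)) / (|H(k)|**2 + noise_var)."""
    h = np.asarray(chan_est)
    w = np.conj(h) / (np.abs(h)**2 + noise_var)
    return np.asarray(rx_freq) * w

# toy block: QPSK symbols through a known frequency-selective channel
rng = np.random.default_rng(1)
x = (rng.choice([-1, 1], 8) + 1j * rng.choice([-1, 1], 8)) / np.sqrt(2)
h = np.array([1.0, 0.8 - 0.3j, 0.2 + 0.5j, 1.1, 0.9j, 0.7, 0.4 - 0.6j, 1.0])
y = h * x                                # noiseless received frequency-domain block
x_hat = mmse_fde(y, h, noise_var=0.0)    # with zero noise, reduces to zero-forcing
```

With noise_var = 0 the weight is exactly 1/H(k) (zero-forcing); a positive noise_var shrinks the weight on weak subcarriers, which is what avoids the noise enhancement in deep fades.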
Effects of tRNA modification on translational accuracy depend on intrinsic codon-anticodon strength.
Manickam, Nandini; Joshi, Kartikeya; Bhatt, Monika J; Farabaugh, Philip J
2016-02-29
Cellular health and growth requires protein synthesis to be both efficient to ensure sufficient production, and accurate to avoid producing defective or unstable proteins. The background of misreading error frequency by individual tRNAs is as low as 2 × 10^-6 per codon but is codon-specific with some error frequencies above 10^-3 per codon. Here we test the effect on error frequency of blocking post-transcriptional modifications of the anticodon loops of four tRNAs in Escherichia coli. We find two types of responses to removing modification. Blocking modification of tRNA(UUC)(Glu) and tRNA(QUC)(Asp) increases errors, suggesting that the modifications act at least in part to maintain accuracy. Blocking even identical modifications of tRNA(UUU)(Lys) and tRNA(QUA)(Tyr) has the opposite effect of decreasing errors. One explanation could be that the modifications play opposite roles in modulating misreading by the two classes of tRNAs. Given available evidence that modifications help preorder the anticodon to allow it to recognize the codons, however, the simpler explanation is that unmodified 'weak' tRNAs decode too inefficiently to compete against cognate tRNAs that normally decode target codons, which would reduce the frequency of misreading. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Rankin, Richard; Kotter, Dale
1994-01-01
An optical voltage reference provides an alternative to a battery source. The apparatus provides a temperature-stable, high-precision, isolated voltage reference through the use of optical isolation techniques to eliminate current- and impedance-coupling errors. Pulse-rate frequency modulation is employed to eliminate errors in the optical transmission link, while phase-lock feedback is employed to stabilize the frequency-to-voltage transfer function.
ERIC Educational Resources Information Center
Uehara, Soichi
This study was made to determine the most prevalent errors, areas of weakness, and their frequency in the writing of letters so that a course in business communications classes at Kapiolani Community College (Hawaii) could be prepared that would help students learn to write effectively. The 55 participating students were divided into two groups…
Space charge enhanced plasma gradient effects on satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.
1991-01-01
It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. We discuss space charge enhanced Plasma Gradient Induced Error (PGIE) in general terms, present the results of a laboratory experiment designed to demonstrate this error, and derive a simple expression that quantifies it. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors under space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space charge enhanced PGIE incurred by two identical current-biased probes.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Rodgers, E. B.
1977-01-01
An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.
Comparing different models of the development of verb inflection in early child Spanish.
Aguado-Orea, Javier; Pine, Julian M
2015-01-01
How children acquire knowledge of verb inflection is a long-standing question in language acquisition research. In the present study, we test the predictions of some current constructivist and generativist accounts of the development of verb inflection by focusing on data from two Spanish-speaking children between the ages of 2;0 and 2;6. The constructivist claim that children's early knowledge of verb inflection is only partially productive is tested by comparing the average number of different inflections per verb in matched samples of child and adult speech. The generativist claim that children's early use of verb inflection is essentially error-free is tested by investigating the rate at which the children made subject-verb agreement errors in different parts of the present tense paradigm. Our results show: 1) that, although even adults' use of verb inflection in Spanish tends to look somewhat lexically restricted, both children's use of verb inflection was significantly less flexible than that of their caregivers, and 2) that, although the rate at which the two children produced subject-verb agreement errors in their speech was very low, this overall error rate hid a consistent pattern of error in which error rates were substantially higher in low frequency than in high frequency contexts, and substantially higher for low frequency than for high frequency verbs. These results undermine the claim that children's use of verb inflection is fully productive from the earliest observable stages, and are consistent with the constructivist claim that knowledge of verb inflection develops only gradually.
Handling the satellite inter-frequency biases in triple-frequency observations
NASA Astrophysics Data System (ADS)
Zhao, Lewen; Ye, Shirong; Song, Jia
2017-04-01
The new generation of GNSS satellites, including BDS, Galileo, modernized GPS, and GLONASS, transmit navigation data at more frequencies. Multi-frequency signals open new prospects for precise positioning, but the satellite code and phase inter-frequency biases (IFB) introduced by the third frequency need to be handled. Satellite code IFB can be corrected using products estimated by different strategies, but the theoretical and numerical compatibility of these methods needs to be verified. Furthermore, a new type of phase IFB, which changes with the relative sun-spacecraft-earth geometry, has been observed, and it is necessary to investigate the cause and possible impacts of this time-variant phase IFB (TIFB). We therefore present a systematic analysis of the relationship between satellite clocks and phase TIFB, and compare the handling strategies for code and phase IFB in triple-frequency positioning. First, the un-differenced L1/L2 satellite clock corrections considering the hardware delays are derived, and the IFB introduced by dual-frequency satellite clocks into the triple-frequency PPP model is detailed. The analysis shows that estimated satellite clocks actually contain the time-variant phase hardware delays, which are compensated in L1/L2 ionosphere-free combinations but lead to TIFB when the third frequency is used. Then, the methods used to correct the code and phase IFB are discussed. Standard point positioning (SPP) and precise point positioning (PPP) using BDS observations are carried out to validate the improvement of different IFB correction strategies. Experiments show that code IFB derived from DCBs and from the geometry-free and ionosphere-free combination agree to within 0.3 ns for all satellites. Positioning results and error distributions with the two code IFB correction strategies show similar tendencies, demonstrating their interchangeability.
The original and wavelet-filtered phase TIFB long-term series show significant periodic characteristics for most GEO and IGSO satellites, with magnitudes varying between -5 cm and 5 cm. Finally, BDS L1/L3 kinematic PPP is conducted with code IFB corrected using DCB combinations and TIFB corrected using the filtered series. Results show that the IFB-corrected L1/L3 PPP achieves convergence and positioning accuracy comparable to the L1/L2 combination in both static and kinematic modes.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung; Liao, Liang; Jones, Jeffrey A.; Kwiatkowski, John M.
2015-01-01
It has long been recognized that path-integrated attenuation (PIA) can be used to improve precipitation estimates from high-frequency weather radar data. One approach that provides an estimate of this quantity from airborne or spaceborne radar data is the surface reference technique (SRT), which uses measurements of the surface cross section in the presence and absence of precipitation. Measurements from the dual-frequency precipitation radar (DPR) on the Global Precipitation Measurement (GPM) satellite afford the first opportunity to test the method for spaceborne radar data at Ka band as well as for the Ku-band-Ka-band combination. The study begins by reviewing the basis of the single- and dual-frequency SRT. As the performance of the method is closely tied to the behavior of the normalized radar cross section (NRCS or sigma(0)) of the surface, the statistics of sigma(0) derived from DPR measurements are given as a function of incidence angle and frequency for ocean and land backgrounds over a 1-month period. Several independent estimates of the PIA, formed by means of different surface reference datasets, can be used to test the consistency of the method since, in the absence of error, the estimates should be identical. Along with theoretical considerations, the comparisons provide an initial assessment of the performance of the single- and dual-frequency SRT for the DPR. The study finds that the dual-frequency SRT can provide improvement in the accuracy of path attenuation estimates relative to the single-frequency method, particularly at Ku band.
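The reference logic of the single- and dual-frequency SRT can be illustrated with a toy calculation (hypothetical sigma0 values in dB; the actual surface reference datasets described above are far richer, spanning incidence angle, frequency, and surface type):

```python
import numpy as np

def pia_single(sigma0_ref, sigma0_rain):
    """Single-frequency SRT: PIA (dB) is the rain-free reference sigma0
    minus the sigma0 measured through precipitation."""
    return float(np.mean(sigma0_ref) - sigma0_rain)

def dpia_dual(ref_ku, ref_ka, meas_ku, meas_ka):
    """Dual-frequency SRT: the Ka-Ku sigma0 *difference* is referenced,
    cancelling surface variability that is common to both bands."""
    ref = np.mean(np.asarray(ref_ka) - np.asarray(ref_ku))
    return float(ref - (meas_ka - meas_ku))

ref_ku = [10.2, 10.0, 9.8]   # hypothetical rain-free sigma0 values, dB
ref_ka = [8.3, 8.0, 7.7]
print(round(pia_single(ref_ku, 7.0), 6))              # -> 3.0
print(round(dpia_dual(ref_ku, ref_ka, 7.0, 1.0), 6))  # -> 4.0
```

The cancellation of common surface variability in the differenced reference is the reason the dual-frequency form can outperform the single-frequency SRT, particularly at Ku band where the rain attenuation signal is weak.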
An auxiliary frequency tracking system for general purpose lock-in amplifiers
NASA Astrophysics Data System (ADS)
Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu
2018-04-01
Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
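The core conversion from measured phase drift to frequency deviation can be sketched as follows (a minimal illustration of the principle, not the actual firmware of the SR-850 peripheral):

```python
def freq_deviation(phase1_deg, phase2_deg, dt_s):
    """Frequency deviation (Hz) inferred from the drift of the LIA's
    measured phase over dt_s seconds, wrapped to [-180, 180) degrees."""
    dphi = (phase2_deg - phase1_deg + 180.0) % 360.0 - 180.0
    return dphi / (360.0 * dt_s)

# A phase advancing by 3.6 degrees per second implies a 0.01 Hz deviation
# between the signal modulator and the local reference.
print(round(freq_deviation(10.0, 13.6, 1.0), 6))  # -> 0.01
```

The wrap to [-180, 180) matters in practice: a LIA reports phase modulo 360 degrees, so the raw difference between two readings can jump by a full cycle even when the true drift is small.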
NASA Astrophysics Data System (ADS)
Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub
2017-05-01
Nowadays, relatively precise pHEMT models are available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of the drain-current equations and their derivatives. In this paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently relativized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older models and two newly suggested ones are included in the comparison alongside the traditionally precise Ahmed, TOM-2, and Materka models. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models, including the higher-order derivatives, is illustrated by s-parameter analysis and measurement at several operating points, as well as by computation and measurement of the IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
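The two-step metric, relative RMS error followed by normalization to the best model in each category, can be sketched with hypothetical drain-current data (the model names and values below are illustrative, not the paper's):

```python
import numpy as np

def rel_rms(measured, modeled):
    """Relative root-mean-square error of a model against measurements."""
    measured, modeled = np.asarray(measured), np.asarray(modeled)
    return float(np.sqrt(np.mean(((modeled - measured) / measured) ** 2)))

def relativize(errors):
    """Divide each model's error by the smallest error in the category,
    so the most accurate model scores exactly 1."""
    best = min(errors.values())
    return {name: err / best for name, err in errors.items()}

i_meas = [1.0, 2.0, 4.0]  # hypothetical measured drain currents, mA
errors = {"model_a": rel_rms(i_meas, [1.1, 2.1, 4.1]),
          "model_b": rel_rms(i_meas, [1.0, 2.0, 4.4])}
print(relativize(errors))
```

In the paper this is repeated per category (drain current and each derivative order up to the third); a single toy category is shown here. The relative form prevents the large-current region from dominating the error figure.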
Low speed phaselock speed control system [for brushless dc motor]
NASA Technical Reports Server (NTRS)
Fulcher, R. W.; Sudey, J. (Inventor)
1975-01-01
A motor speed control system for an electronically commutated brushless dc motor is provided which includes a phaselock loop with bidirectional torque control for locking the frequency output of a high density encoder, responsive to actual speed conditions, to a reference frequency signal, corresponding to the desired speed. The system includes a phase comparator, which produces an output in accordance with the difference in phase between the reference and encoder frequency signals, and an integrator-digital-to-analog converter unit, which converts the comparator output into an analog error signal voltage. Compensation circuitry, including a biasing means, is provided to convert the analog error signal voltage to a bidirectional error signal voltage which is utilized by an absolute value amplifier, rotational decoder, power amplifier-commutators, and an arrangement of commutation circuitry.
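The loop described above can be sketched as a discrete-time simulation (hypothetical gains and time step; the proportional term `kp` stands in for the compensation circuitry that damps the loop, and the integrated phase difference plays the role of the comparator plus integrator-DAC chain):

```python
# Minimal sketch of a phaselock speed loop locking the encoder frequency
# to the reference frequency via a bidirectional torque command.
f_ref = 100.0     # reference frequency, Hz (desired speed)
f_enc = 90.0      # encoder frequency, Hz (actual speed, initially off)
phase_err = 0.0   # integrated phase difference (comparator + integrator)
dt, kp, ki = 0.001, 5.0, 50.0
for _ in range(20000):
    e = f_ref - f_enc                        # frequency error
    phase_err += e * dt                      # phase comparator output, integrated
    f_enc += (kp * e + ki * phase_err) * dt  # torque proportional to error signal
print(round(f_enc, 3))  # -> 100.0
```

Because the phase error is the integral of the frequency error, locking the phase forces the steady-state frequency error to exactly zero, which is the point of a phaselock loop over a plain proportional speed control.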
NASA Technical Reports Server (NTRS)
Didlake, Anthony C., Jr.; Heymsfield, Gerald M.; Tian, Lin; Guimond, Stephen R.
2015-01-01
The coplane analysis technique for mapping the three-dimensional wind field of precipitating systems is applied to the NASA High Altitude Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a dual-frequency Doppler radar system with two downward pointing and conically scanning beams. The coplane technique interpolates radar measurements to a natural coordinate frame, directly solves for two wind components, and integrates the mass continuity equation to retrieve the unobserved third wind component. This technique is tested using a model simulation of a hurricane and compared to a global optimization retrieval. The coplane method produced lower errors for the cross-track and vertical wind components, while the global optimization method produced lower errors for the along-track wind component. Cross-track and vertical wind errors were dependent upon the accuracy of the estimated boundary condition winds near the surface and at nadir, which were derived by making certain assumptions about the vertical velocity field. The coplane technique was then applied successfully to HIWRAP observations of Hurricane Ingrid (2013). Unlike the global optimization method, the coplane analysis allows for a transparent connection between the radar observations and specific analysis results. With this ability, small-scale features can be analyzed more adequately and erroneous radar measurements can be identified more easily.
The impact of modelling errors on interferometer calibration for 21 cm power spectra
NASA Astrophysics Data System (ADS)
Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline
2017-09-01
We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.
GPS-Based Precision Orbit Determination for a New Era of Altimeter Satellites: Jason-1 and ICESat
NASA Technical Reports Server (NTRS)
Luthcke, Scott B.; Rowlands, David D.; Lemoine, Frank G.; Zelensky, Nikita P.; Williams, Teresa A.
2003-01-01
Accurate positioning of the satellite center of mass is necessary to meet an altimeter mission's science goals. The fundamental science observation is an altimeter-derived topographic height, and errors in positioning the satellite's center of mass directly impact this observation. Therefore, orbit error is a critical component in the error budget of altimeter satellites. With the launch of the Jason-1 radar altimeter (Dec. 2001) and the ICESat laser altimeter (Jan. 2003), a new era of satellite altimetry has begun. Both missions pose several challenges for precision orbit determination (POD). The Jason-1 radial orbit accuracy goal is 1 cm, while ICESat (600 km), at a much lower altitude than Jason-1 (1300 km), has a radial orbit accuracy requirement of less than 5 cm. Fortunately, Jason-1 and ICESat POD can rely on near-continuous tracking data from the dual-frequency codeless BlackJack GPS receiver and Satellite Laser Ranging. Analysis of current GPS-based solution performance indicates the 1-cm radial orbit accuracy goal is being met for Jason-1, while the radial orbit accuracy for ICESat is well below the 5-cm mission requirement. A brief overview of the GPS precision orbit determination methodology and results for both Jason-1 and ICESat is presented.
The influence of the uplink noise on the performance of satellite data transmission systems
NASA Astrophysics Data System (ADS)
Dewal, Vrinda P.
The problem of transmission of binary phase shift keying (BPSK) modulated digital data through a bandlimited nonlinear satellite channel in the presence of uplink and downlink Gaussian noise and intersymbol interference is examined. The satellite transponder is represented by a zero-memory bandpass nonlinearity with AM/AM conversion. The proposed optimum linear receiver structure consists of tapped-delay lines followed by a decision device. The linear receiver is designed to minimize the mean square error, which is a function of the intersymbol interference and the uplink and downlink noise. The minimum mean square error (MMSE) equalizer is derived using Wiener-Kolmogorov theory. In this receiver, the decision about the transmitted signal is made by taking into account the received sequence of the present sample and the interfering past and future samples, which represent the intersymbol interference (ISI). Illustrative examples of the receiver structures are considered for nonlinear channels with symmetrical and asymmetrical frequency responses of the transmitter filter. The transponder nonlinearity is simulated by a polynomial using only the first- and third-order terms. A computer simulation determines the tap gain coefficients of the MMSE equalizer, which adapt to the various uplink and downlink noise levels. The performance of the MMSE equalizer is evaluated in terms of an estimate of the average probability of error.
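The tapped-delay-line MMSE solution can be sketched by solving the Wiener-Hopf equations on simulated data. The toy channel below (a memoryless cubic standing in for the transponder nonlinearity, a short ISI response, and downlink noise only) is an assumption for illustration, not the paper's channel model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=5000)             # BPSK symbols
# Toy channel: first- and third-order polynomial nonlinearity, then ISI,
# then additive noise (the full model also has noise before the nonlinearity).
v = np.convolve(x + 0.1 * x**3, [1.0, 0.4, 0.2])[: x.size]
y = v + 0.1 * rng.standard_normal(x.size)

# MMSE tapped-delay-line receiver: solve the Wiener-Hopf equations R w = p.
n_taps, delay = 7, 3
Y = np.array([np.roll(y, k) for k in range(n_taps)])  # delay-line contents
d = np.roll(x, delay)                                 # decision target
R = Y @ Y.T / x.size                                  # input correlation matrix
p = Y @ d / x.size                                    # cross-correlation vector
w = np.linalg.solve(R, p)                             # MMSE tap gains
ber = float(np.mean(np.sign(w @ Y) != d))
print(ber)
```

The decision delay lets the equalizer exploit both past and future samples of the received sequence, exactly the point made in the abstract about interfering past and future samples.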
Linear quadratic stochastic control of atomic hydrogen masers.
Koppang, P; Leland, R
1999-01-01
Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. The LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
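The quadratic-cost steer calculation can be illustrated in a simplified one-interval form (a sketch under stated assumptions: Kalman filtering is taken to have already produced the time and frequency error estimates, and the cost weights below are the designer-chosen penalties mentioned in the abstract):

```python
def steer(x, y, tau, q_time=1.0, q_freq=1.0, r=0.1):
    """Frequency steer u minimizing the one-interval quadratic cost
        J(u) = q_time*(x + (y + u)*tau)**2 + q_freq*(y + u)**2 + r*u**2
    where x, y are Kalman-filtered time and frequency errors and tau is
    the control interval. Setting dJ/du = 0 gives a linear equation in u."""
    a = q_time * tau**2 + q_freq + r
    b = q_time * tau * (x + y * tau) + q_freq * y
    return -b / a

# With no frequency penalty and no effort penalty, the steer simply
# cancels the predicted time error: u = -(x + y*tau)/tau.
print(steer(1.0, 0.0, 1.0, q_time=1.0, q_freq=0.0, r=0.0))  # -> -1.0
```

Raising `r` makes the controller less aggressive, and the balance between `q_time` and `q_freq` decides whether time or frequency differences are driven to zero first, mirroring the designer-chosen penalties described above.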
High-precision coseismic displacement estimation with a single-frequency GPS receiver
NASA Astrophysics Data System (ADS)
Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing
2015-07-01
To improve the performance of Global Positioning System (GPS) in the earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both the rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable to make the sparse dual-frequency GPS networks denser. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor when determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined by using the coordinates of the stations. When the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver could be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated based on time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. This shows that the proposed approach improves the accuracy of the displacement of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
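The plane-fitting step of the MSEID model can be sketched as an ordinary least-squares problem (hypothetical station coordinates and delay values; the real model works per satellite and per epoch pair):

```python
import numpy as np

def fit_plane(enu_km, delays_m):
    """Fit epoch-differenced ionospheric delays for one satellite to a
    plane over station east/north coordinates: d = c0 + c1*e + c2*n."""
    enu_km = np.asarray(enu_km, dtype=float)
    A = np.column_stack([np.ones(len(enu_km)), enu_km[:, 0], enu_km[:, 1]])
    coef, *_ = np.linalg.lstsq(A, np.asarray(delays_m, float), rcond=None)
    return coef

def predict(coef, east_km, north_km):
    """Evaluate the fitted plane at a single-frequency station's position."""
    return coef[0] + coef[1] * east_km + coef[2] * north_km

# Hypothetical network: four dual-frequency stations and their observed
# epoch-differenced delays (metres), lying exactly on a plane here.
stations = [[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]]
delays = [0.050, 0.060, 0.045, 0.055]
coef = fit_plane(stations, delays)
print(round(predict(coef, 25.0, 25.0), 6))  # -> 0.0525
```

The predicted value is the ionospheric correction applied to the single-frequency receiver's time-differenced carrier-phase measurements before the coseismic displacement is integrated.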
Weather radar equation and a receiver calibration based on a slice approach
NASA Astrophysics Data System (ADS)
Yurchak, B. S.
2012-12-01
Two circumstances are essential when exploiting radar measurements of precipitation. The first is a correct physical-mathematical model linking parameters of the rainfall microstructure with the magnitude of the return signal (the weather radar equation (WRE)). The second is a precise measurement of received power, which is ensured by calibration of the radar receiver. A WRE for a spatially extended geophysical target (SEGT), such as cloud or rain, has been derived based on the "slice" approach [1]. In this approach, the particles located close to the wavefront of the radar illumination are assumed to produce backscatter that is mainly coherent. This approach allows the contribution of the microphysical parameters of the scattering media to the radar cross section to be treated more comprehensively than in models based on the incoherent approach (e.g., the Probert-Jones equation (PJE)). In the particular case when the particle number fluctuations within slices follow the Poisson law, the derived WRE reduces to the PJE. When the Poisson index (standard deviation / mean number of particles) of a slice deviates from 1, the return power estimated by the PJE deviates from the actual value by +8 dB to -12 dB. In general, the backscatter depends on the mean, variance, and third moment of the particle size distribution function (PSDF), whereas the incoherent approach assumes dependence only on the sixth moment of the PSDF (radar reflectivity Z). An additional difference from the classical estimate can be caused by a correlation between slice field reflectivities [2]. Overall, the deviation of the particle statistics of a slice from the Poisson law is one of the main physical factors contributing to errors in radar precipitation measurements based on the Z-conception. One component of the calibration error is caused by the difference between how the weather radar receiver processes the calibration pulse and the actual return signal from the SEGT.
A receiver with a nonuniform amplitude-frequency response (AFR) processes these signals with the same input power but with different radio-frequency spectra (RFS). This causes different output magnitudes due to the different distortion experienced as the RFS passes through the receiver filter. To assess the calibration error, the RFS of signals from an SEGT has been studied in theoretical, experimental, and simulation stages [3]. It is shown that the return signal carrier wave is phase modulated due to the overlapping of replicas of the RF probing pulse reflected from the SEGT's slices. The RFS depends on the phase statistics of the carrier wave and on the RFS of the probing pulse; the bandwidth of the SEGT's RFS is not greater than that of the probing pulse. The typical phase correlation interval was found to be about the same as the probing pulse duration. Application of a long calibration signal (proportional to the SEGT extension) causes an error of up to -1 dB for a conventional radar with a matched filter. To eliminate the calibration error, the power estimate of an individual return waveform should be corrected with a transformation loss coefficient calculated from the RFS and AFR parameters. To cover both the high- and low-frequency parts of the receiver, the calibration should be performed with a long pulse composed of adjoining replicas of the probe pulse with random initial phases, all with the same magnitude governed by the probe pulse power.
Yang, Lin; Dai, Meng; Xu, Canhua; Zhang, Ge; Li, Weichen; Fu, Feng; Shi, Xuetao; Dong, Xiuzhen
2017-01-01
Frequency-difference electrical impedance tomography (fdEIT) reconstructs frequency-dependent changes of a complex impedance distribution. It has a potential application in acute stroke detection because there are significant differences in impedance spectra between stroke lesions and normal brain tissues. However, fdEIT suffers from the influences of electrode-skin contact impedance since contact impedance varies greatly with frequency. When using fdEIT to detect stroke, it is critical to know the degree of measurement errors or image artifacts caused by contact impedance. To our knowledge, no study has systematically investigated the frequency spectral properties of electrode-skin contact impedance on human head and its frequency-dependent effects on fdEIT used in stroke detection within a wide frequency band (10 Hz-1 MHz). In this study, we first measured and analyzed the frequency spectral properties of electrode-skin contact impedance on 47 human subjects' heads within 10 Hz-1 MHz. Then, we quantified the frequency-dependent effects of contact impedance on fdEIT in stroke detection in terms of the current distribution beneath the electrodes and the contact impedance imbalance between two measuring electrodes. The results showed that the contact impedance at high frequencies (>100 kHz) significantly changed the current distribution beneath the electrode, leading to nonnegligible errors in boundary voltages and artifacts in reconstructed images. The contact impedance imbalance at low frequencies (<1 kHz) also caused significant measurement errors. We conclude that the contact impedance has critical frequency-dependent influences on fdEIT and further studies on reducing such influences are necessary to improve the application of fdEIT in stroke detection.
Microwave properties of a quiet sea
NASA Technical Reports Server (NTRS)
Stacey, J.
1985-01-01
The microwave flux responses of a quiet sea are observed at five microwave frequencies and with both horizontal and vertical polarizations at each frequency, using a simultaneous 10-channel receiving system. The measurements are taken from Earth orbit with an articulating antenna. The 10 channel responses are taken simultaneously since they share a common articulating collector with a multifrequency feed. The plotted flux responses show: (1) the effects of the relative on-axis gain of the collecting aperture for each frequency; (2) the effects of polarization rotation in the output responses of the receiver when the collecting aperture mechanically rotates about a feed that is fixed; (3) the difference between the flux magnitudes for the horizontal and vertical channels, at each of the five frequencies and for each pointing position, over a 44 degree scan angle; and (4) the RMS value of the clutter, as reckoned over the interval of a full swath for each of the 10 channels. The clutter is derived from the standard error of estimate of the plotted swath response for each channel. The expected value of the background temperature is computed for each of the three quiet seas. The background temperature includes contributions from the cosmic background, the downwelling path, the sea surface, and the upwelling path.
Recent advances in capacitance type of blade tip clearance measurements
NASA Technical Reports Server (NTRS)
Barranger, John P.
1988-01-01
Two recent electronic advances at NASA-Lewis that meet the blade tip clearance needs of a wide class of fans, compressors, and turbines are described. The first is a frequency modulated (FM) oscillator that requires only a single low-cost ultrahigh frequency operational amplifier. Its carrier frequency is 42.8 MHz when used with a 61 cm long hermetically sealed coaxial cable. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a typical rotor blade. High temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum tested temperature, which produces a clearance error of ±10 microns at a clearance of 500 microns. In the second advance, a guarded probe configuration tolerates a larger cable capacitance. The capacitance of the probe forms part of a small-time-constant feedback loop around a high speed operational amplifier. The solution of the governing differential equation is applied to a ramp-type input; the result is an amplifier output containing a term proportional to the derivative of the feedback capacitance. The capacitance is obtained by subtracting a balancing reference channel, followed by an integration stage.
Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.
2013-01-01
A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented as the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
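The simulation idea can be sketched under a Thurstone Case V model (an assumption for illustration; the paper does not specify its scaling variant here): choice proportions are binomial draws around the true preference probabilities, and scale values are recovered from column-mean z-scores, so the spread of the recovered values over repeated trials estimates the scaling error.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def simulate_scaling(true_scale, n_obs, rng):
    """One simulated paired-comparison experiment under Thurstone Case V:
    each pair (i, j) is judged n_obs times with binomially perturbed
    choice proportions; zero-mean scale values are recovered from
    column-mean z-scores of the proportion matrix."""
    k = len(true_scale)
    P = np.full((k, k), 0.5)
    for i in range(k):
        for j in range(k):
            if i != j:
                p_true = nd.cdf((true_scale[j] - true_scale[i]) / 2**0.5)
                P[i, j] = rng.binomial(n_obs, p_true) / n_obs
    P = np.clip(P, 0.01, 0.99)        # keep z-scores finite
    Z = np.vectorize(nd.inv_cdf)(P)
    return Z.mean(axis=0) * 2**0.5

rng = np.random.default_rng(1)
true_scale = np.array([0.0, 0.5, 1.0, 1.5])
trials = np.array([simulate_scaling(true_scale, 50, rng) for _ in range(200)])
# Average standard deviation of the scaled values, the paper's error metric.
print(round(float(trials.std(axis=0).mean()), 3))
```

Rerunning with a small sampling size (`n_obs`) and few stimuli reproduces the paper's qualitative finding: the standard deviation of the scaled values grows quickly as both shrink.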
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The low accuracy and low efficiency of off-line detection of thermal test low-frequency cable faults can be addressed by a cable fault detection system in which an FPGA exports an M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware online monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault distance along the thermal test low-frequency cable.
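The M-sequence correlation principle behind SSTDR can be sketched as follows (illustrative LFSR taps and fault parameters, not the system's actual configuration): the received signal is the probe plus an attenuated echo, and circular cross-correlation with the reference sequence peaks at the round-trip delay.

```python
import numpy as np

def lfsr_mseq(taps=(7, 6), nbits=7):
    """+/-1 maximal-length sequence from a Fibonacci LFSR
    (feedback polynomial x^7 + x^6 + 1; period 2**7 - 1 = 127)."""
    state = [1] * nbits
    out = []
    for _ in range(2**nbits - 1):
        out.append(1.0 if state[-1] else -1.0)
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(out)

m = lfsr_mseq()
delay, refl = 40, -0.5              # hypothetical fault: inverted 40-sample echo
rx = m + refl * np.roll(m, delay)   # probe plus attenuated reflection
# Circular cross-correlation with the reference M-sequence: the fault shows
# up as a secondary peak at the round-trip delay (k = 0 is the probe itself).
corr = np.array([np.dot(rx, np.roll(m, k)) for k in range(len(m))])
fault_at = int(np.argmax(np.abs(corr[1:]))) + 1
print(fault_at)  # -> 40
```

The sharp two-valued autocorrelation of an M-sequence (N at zero lag, -1 elsewhere) is what makes this detection possible while the probe remains a low-amplitude, noise-like signal suitable for on-line monitoring.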
Pediatric Anesthesiology Fellows' Perception of Quality of Attending Supervision and Medical Errors.
Benzon, Hubert A; Hajduk, John; De Oliveira, Gildasio; Suresh, Santhanam; Nizamuddin, Sarah L; McCarthy, Robert; Jagannathan, Narasimhan
2018-02-01
Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an indirect association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers to effective faculty supervision. One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%; 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained. Thirteen of 101 (13%; 95% CI, 7%-21%) reported making >1 mistake with negative consequences to patients, and 23 of 104 (22%; 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error compared to those reporting ≤1 errors (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences between those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), and those who did not, 3.4 (3.3-3.7); median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35. We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States.
Interestingly, fellows' perception of quality of faculty supervision was not associated with the frequency of reported errors. The current results with a narrow CI suggest the need to evaluate other potential factors that can be associated with the high frequency of reported errors by pediatric fellows (eg, fatigue, burnout). The identification of factors that lead to medical errors by pediatric anesthesiology fellows should be a main research priority to improve both trainee education and best practices of pediatric anesthesia.
De Oliveira, Gildasio S; Rahmani, Rod; Fitzgerald, Paul C; Chang, Ray; McCarthy, Robert J
2013-04-01
Poor supervision of physician trainees can be detrimental not only to resident education but also to patient care and safety. Inadequate supervision has been associated with more frequent deaths of patients under the care of junior residents. We hypothesized that residents reporting more medical errors would also report lower supervision quality scores than those reporting fewer medical errors. The primary objective of this study was to evaluate the association between the frequency of medical errors reported by residents and their perceived quality of faculty supervision. A cross-sectional nationwide survey was sent to 1000 residents randomly selected from anesthesiology training departments across the United States. Residents from 122 residency programs were invited to participate; the median (interquartile range) per institution was 7 (4-11). Participants were asked to complete a survey assessing demography, perceived quality of faculty supervision, and perceived causes of inadequate supervision. Responses to the statements "I perform procedures for which I am not properly trained," "I make mistakes that have negative consequences for the patient," and "I have made a medication error (drug or incorrect dose) in the last year" were used to assess error rates. Average supervision scores were determined using the De Oliveira Filho et al. scale and compared across the self-reported error frequency categories using the Kruskal-Wallis test. Six hundred four residents responded to the survey (60.4%). Forty-five (7.5%) of the respondents reported performing procedures for which they were not properly trained, 24 (4%) reported having made mistakes with negative consequences to patients, and 16 (3%) reported medication errors in the last year having occurred multiple times or often. Supervision scores were inversely correlated with the frequency of reported errors for all 3 questions evaluating errors. 
At a cutoff value of 3, supervision scores demonstrated an overall accuracy (area under the curve) (99% confidence interval) of 0.81 (0.73-0.86), 0.89 (0.77-0.95), and 0.93 (0.77-0.98) for predicting a response of multiple times or often to the question of performing procedures for which they were not properly trained, reported mistakes with negative consequences to patients, and reported medication errors in the last year, respectively. Anesthesiology trainees who reported a greater incidence of medical errors with negative consequences to patients and drug errors also reported lower scores for supervision by faculty. Our findings suggest that further studies of the association between supervision and patient safety are warranted. (Anesth Analg 2013;116:892-7).
A Learner Corpus-Based Study on Verb Errors of Turkish EFL Learners
ERIC Educational Resources Information Center
Can, Cem
2017-01-01
As learner corpora have presently become readily accessible, it is practicable to examine interlanguage errors and carry out error analysis (EA) on learner-generated texts. The data available in a learner corpus enable researchers to investigate authentic learner errors and their respective frequencies in terms of types and tokens as well as…
Article Errors in the English Writing of Saudi EFL Preparatory Year Students
ERIC Educational Resources Information Center
Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.
2017-01-01
This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…
The Nature of Error in Adolescent Student Writing
ERIC Educational Resources Information Center
Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang
2014-01-01
This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…
Performance of coded MFSK in a Rician fading channel. [Multiple Frequency Shift Keyed modulation
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1975-01-01
The performance of convolutional codes in conjunction with noncoherent multiple frequency shift-keyed (MFSK) modulation and Viterbi maximum likelihood decoding on a Rician fading channel is examined in detail. While the primary motivation underlying this work has been concerned with system performance on the planetary entry channel, it is expected that the results are of considerably wider interest. Particular attention is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with the results of theoretical propagation studies. Fairly general upper bounds on bit error probability performance in the presence of fading are derived and compared with simulation results using both unquantized and quantized receiver outputs. The effects of receiver quantization and channel memory are investigated and it is concluded that the coded noncoherent MFSK system offers an attractive alternative to coherent BPSK in providing reliable low data rate communications in fading channels typical of planetary entry missions.
A digital optical phase-locked loop for diode lasers based on field programmable gate array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Zhouxiang; Zhang Xian; Huang Kaikai
2012-09-15
We have designed and implemented a highly digital optical phase-locked loop (OPLL) for diode lasers in atom interferometry. The three parts of the control circuit in this OPLL, comprising the phase and frequency detector (PFD), the loop filter, and the proportional-integral-derivative (PID) controller, are implemented in a single field-programmable gate array chip. A structure compatible with the MAX9382/MCH12140 was chosen for the PFD, and pipelining and parallelism have been adopted in the PID controller. In particular, a high-speed clock and a twisted ring counter have been integrated in the most crucial part, the loop filter. This OPLL achieves a narrow beat-note linewidth below 1 Hz, a residual mean-square phase error of 0.14 rad², and a transition time of 100 μs under a 10 MHz frequency step. A main innovation of this design is the complete digitalization of the whole control circuit in an OPLL for diode lasers.
An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)
NASA Technical Reports Server (NTRS)
Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.
1990-01-01
Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made from July through September 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan is inserted into Venus orbit, in August, 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position, and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged particle effect is completely calibrated with S- and X-Band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame, and by including high frequency Earth tidal terms in the Earth rotation model, adds a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
NASA Astrophysics Data System (ADS)
Sekhar, S. Chandra; Sreenivas, TV
2004-12-01
We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
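As a minimal illustration of the zero-crossing principle the abstract builds on (not the authors' adaptive-window MMSE algorithm), the local frequency of a real sinusoid can be estimated from the spacing of its zero crossings, since one half-period elapses between successive crossings. The function name and example signal below are illustrative:

```python
import numpy as np

def zero_crossing_if(signal, fs):
    """Estimate local frequency from the spacing of zero crossings.

    One half-period elapses between successive crossings, so the
    local frequency is fs / (2 * spacing_in_samples)."""
    s = np.sign(signal)
    zc = np.where(np.diff(s) != 0)[0]   # indices just before each sign change
    spacing = np.diff(zc)               # samples per half-period
    freqs = fs / (2.0 * spacing)        # Hz over each crossing interval
    times = zc[1:] / fs
    return times, freqs

# Example: 50 Hz sinusoid sampled at 8 kHz
fs = 8000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)
_, f_hat = zero_crossing_if(x, fs)
```

A polynomial fit to these piecewise estimates over a window, with the window length adapted to the IF's local curvature, is the direction the paper then takes.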
Estimating population diversity with CatchAll
Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.
2012-01-01
Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
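The 'frequency count' input format CatchAll consumes can be illustrated with the classical Chao1 nonparametric richness estimator, a standard estimator of the same coverage-based flavor; this sketch is not CatchAll's model-selection algorithm, and the function name is hypothetical:

```python
def chao1(freq_counts):
    """Chao1 richness estimate from 'frequency count' data.

    freq_counts maps j -> f_j, the number of species observed
    exactly j times in the sample."""
    s_obs = sum(freq_counts.values())
    f1 = freq_counts.get(1, 0)   # singletons
    f2 = freq_counts.get(2, 0)   # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    # bias-corrected form when no doubletons are observed
    return s_obs + f1 * (f1 - 1) / 2.0

# 10 singletons, 5 doubletons, 3 species seen 4 times:
# 18 observed species plus 100/10 estimated unseen ones
est = chao1({1: 10, 2: 5, 4: 3})   # 28.0
```

The estimate grows with the ratio of singletons to doubletons, reflecting how much of the population the sample has likely missed.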
Hurst, Robert B; Mayerbacher, Marinus; Gebauer, Andre; Schreiber, K Ulrich; Wells, Jon-Paul R
2017-02-01
Large ring lasers have exceeded the performance of navigational gyroscopes by several orders of magnitude and have become useful tools for geodesy. In order to apply them to tests in fundamental physics, remaining systematic errors have to be significantly reduced. We derive a modified expression for the Sagnac frequency of a square ring laser gyro under Earth rotation. The modifications include corrections for dispersion (of both the gain medium and the mirrors), for the Goos-Hänchen effect in the mirrors, and for refractive index of the gas filling the cavity. The corrections were measured and calculated for the 16 m² Grossring laser located at the Geodetic Observatory Wettzell. The optical frequency and the free spectral range of this laser were measured, allowing unique determination of the longitudinal mode number, and measurement of the dispersion. Ultimately we find that the absolute scale factor of the gyroscope can be estimated to an accuracy of approximately 1 part in 10⁸.
Global Distribution and Vertical Structure of Clouds Revealed by CALIPSO
NASA Astrophysics Data System (ADS)
Yi, Y.; Minnis, P.; Winker, D.; Huang, J.; Sun-Mack, S.; Ayers, K.
2007-12-01
Understanding the effects of clouds on Earth's radiation balance, especially on longwave fluxes within the atmosphere, depends on having accurate knowledge of cloud vertical location within the atmosphere. The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite mission provides the opportunity to measure the vertical distribution of clouds in greater detail than ever before possible. The CALIPSO cloud layer products from June 2006 to June 2007 are analyzed to determine the occurrence frequency and thickness of clouds as functions of time, latitude, and altitude. In particular, the latitude-longitude and vertical distributions of single- and multi-layer clouds and the latitudinal movement of cloud cover with the changing seasons are examined. The seasonal variabilities of cloud frequency and geometric thickness are also analyzed and compared with similar quantities derived from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) using the Clouds and the Earth's Radiant Energy System (CERES) cloud retrieval algorithms. The comparisons provide an estimate of the errors in cloud fraction, top height, and thickness incurred by passive algorithms.
On the Study of a Quadrature DCSK Modulation Scheme for Cognitive Radio
NASA Astrophysics Data System (ADS)
Quyen, Nguyen Xuan
The past decade has witnessed a boom in wireless communications, which necessitates continuing improvement in data rate, error-rate performance, bandwidth efficiency, and information security. In this work, we propose a quadrature (IQ) differential chaos-shift keying (DCSK) modulation scheme for application in cognitive radio (CR), named CR-IQ-DCSK, which offers these improvements. The chaotic signal is generated in the frequency domain and then converted into the time domain via an inverse Fourier transform. The real and imaginary components of the frequency-based chaotic signal are simultaneously used in the in-phase and quadrature branches of an IQ modulator, where each branch conveys two bits by means of DCSK-based modulation. The schemes and operating principles of the modulator and demodulator are proposed and described. The analytical BER performance of the proposed scheme over a typical multipath Rayleigh fading channel is derived and verified by numerical simulations. Results show that the proposed scheme outperforms DCSK and CDSK, and performs better as the number of channel paths increases.
Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat
de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred
2015-01-01
The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. 
This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations in behaviour at the individual level. PMID:26107643
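The distance and turning-angle metrics described above can be computed from a sequence of planar fixes; this is an illustrative sketch, not the authors' code, and assumes coordinates already projected to a metric plane (e.g. UTM):

```python
import numpy as np

def movement_metrics(xy):
    """Step distances and turning angles from consecutive fixes.

    xy: (n, 2) array of planar coordinates (e.g. UTM metres).
    Returns (n-1) distances and (n-2) turning angles."""
    steps = np.diff(xy, axis=0)
    dist = np.hypot(steps[:, 0], steps[:, 1])
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turn = np.diff(headings)
    # wrap turning angles into (-pi, pi]
    turn = (turn + np.pi) % (2 * np.pi) - np.pi
    return dist, turn

# Example: three sides of a unit square, two 90-degree left turns
dist, turn = movement_metrics(np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]]))
```

These two metrics per interval are exactly the kind of features a classification tree can split on to separate, say, Foraging (short steps, large turns) from Walking (long steps, small turns).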
NASA Astrophysics Data System (ADS)
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS with careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. 
Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low removal rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4Å RMS on Nickel-plated Aluminum and even <1.5Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free' all in one step. In this paper we will discuss recent advancements in MRF technology and the ability to meet requirements for precision optics in low, mid and high spatial frequency regimes and how improved MRF performance addresses the need for achieving tight specifications required for astronomical optics.
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimension basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all the distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems, and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using positional-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
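The two accuracy measures used in the study, RMSE and %VAF, can be computed as below; this is a sketch assuming %VAF is defined as the percentage of reference-signal variance not left in the residual, a common convention:

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between measured and reference positions."""
    e = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(e ** 2)))

def pct_vaf(measured, reference):
    """Percentage of reference variance accounted for by the measurement."""
    m = np.asarray(measured, float)
    r = np.asarray(reference, float)
    return float(100.0 * (1.0 - np.var(m - r) / np.var(r)))

# Illustrative data: reference distances (m) and noisy measurements
ref = np.array([0.0, 1.0, 2.0, 3.0])
meas = ref + np.array([0.1, -0.1, 0.1, -0.1])
```

On this toy data the RMSE is 0.1 m and the %VAF is 99.2, showing how a small residual variance relative to the movement itself yields a %VAF near 100.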
Rankin, R.; Kotter, D.
1994-04-26
An optical voltage reference providing an alternative to a battery source is described. The apparatus provides a temperature-stable, high-precision, isolated voltage reference through the use of optical isolation techniques to eliminate current- and impedance-coupling errors. Pulse-rate frequency modulation is employed to eliminate errors in the optical transmission link, while phase-lock feedback is employed to stabilize the frequency-to-voltage transfer function.
The Accuracy of Two-Way Satellite Time Transfer Calibrations
2005-01-01
Results from successive calibrations of Two-Way Satellite Time and Frequency Transfer (TWSTFT) operational equipment at USNO and five remote stations using portable TWSTFT equipment are analyzed for internal and external errors, finding an average random error of ±0.35. The most accurate means of operational long-distance time transfer are TWSTFT and carrier-phase GPS.
Gomaa Haroun, A H; Li, Yin-Ya
2017-11-01
In today's fast-developing world, load frequency control (LFC) plays a most significant role in providing a good-quality power supply in the power system. To deliver reliable power, an LFC system requires a highly competent and intelligent control technique. Hence, in this article, a novel hybrid fuzzy logic intelligent proportional-integral-derivative (FLiPID) controller has been proposed for LFC of interconnected multi-area power systems. A four-area interconnected thermal power system incorporating physical constraints and boiler dynamics is considered, and the adjustable parameters of the FLiPID controller are optimized using a particle swarm optimization (PSO) scheme employing an integral square error (ISE) criterion. The proposed method has been established to enhance power system performance and to reduce oscillations arising from uncertainties due to variations in the system parameters and load perturbations. The supremacy of the suggested method is demonstrated by comparing the simulation results with some recently reported heuristic methods, such as fuzzy logic proportional-integral (FLPI) and intelligent proportional-integral-derivative (PID) controllers, for the same electrical power system. The investigations showed that the FLiPID controller provides better dynamic performance and outperforms the other approaches in terms of settling time and minimum undershoots of the frequency and tie-line power flow deviations following a perturbation, in addition to achieving an appropriate integral absolute error (IAE). Finally, the sensitivity of the plant is inspected by varying the system parameters and operating load conditions from their nominal values. It is observed that the suggested controller-based optimization algorithm is robust and performs satisfactorily under variations in operating load condition, system parameters and load pattern. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
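The ISE criterion used above to tune the controller parameters is simply the time integral of the squared error signal. A minimal sketch, with an illustrative decaying frequency-deviation trace rather than the paper's system model:

```python
import numpy as np

def ise(error, dt):
    """Integral of squared error via the trapezoidal rule."""
    e2 = np.asarray(error, dtype=float) ** 2
    return float(dt * (e2[0] / 2 + e2[1:-1].sum() + e2[-1] / 2))

# Illustrative frequency-deviation error decaying after a load step
t = np.arange(0.0, 10.0, 0.01)
err = 0.02 * np.exp(-0.5 * t)
cost = ise(err, 0.01)   # analytically ~0.0004 * (1 - e^-10)
```

An optimizer such as PSO evaluates this cost for each candidate gain set and keeps the set that minimizes it; the IAE criterion mentioned in the abstract replaces the square with an absolute value.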
An approach to the analysis of performance of quasi-optimum digital phase-locked loops.
NASA Technical Reports Server (NTRS)
Polk, D. R.; Gupta, S. C.
1973-01-01
An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.
Chua, S S; Tea, M H; Rahman, M H A
2009-04-01
Drug administration errors were the second most frequent type of medication error, after prescribing errors; however, prescribing errors were often intercepted, so administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for errors were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration errors. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that a risk management protocol could be developed and implemented.
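The reported rate and interval can be reproduced from the raw counts with a normal-approximation binomial confidence interval; the paper's exact CI method is not stated, so this sketch only comes close to the reported upper bound:

```python
import math

def rate_with_ci(errors, opportunities, z=1.96):
    """Observed proportion with a normal-approximation confidence interval."""
    p = errors / opportunities
    se = math.sqrt(p * (1 - p) / opportunities)
    return p, (p - z * se, p + z * se)

# 127 erroneous administrations out of 1118 observed opportunities
p, (lo, hi) = rate_with_ci(127, 1118)
# p*100 ≈ 11.4, interval ≈ 9.5-13.2, close to the reported 9.5-13.3
```

The slight mismatch at the upper bound suggests the authors may have used a different interval (e.g. Wilson or exact), which behaves similarly at this sample size.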
Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.
Li, Jielin; Hassebrook, Laurence G; Guan, Chun
2003-01-01
Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
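The low-frequency-guided unwrapping step described above can be sketched as follows: the nonambiguous low-frequency phase predicts the absolute high-frequency phase, and the wrapped high-frequency measurement is shifted by the nearest whole number of 2π cycles. An illustrative implementation, not the authors' code:

```python
import numpy as np

def unwrap_two_frequency(phi_low, phi_high, f_low, f_high):
    """Unwrap a wrapped high-frequency phase using a coarse
    nonambiguous low-frequency phase estimate.

    phi_low  : absolute (nonambiguous) phase at the low frequency
    phi_high : wrapped phase at the high frequency, in (-pi, pi]
    """
    ratio = f_high / f_low
    # integer number of 2*pi cycles missing from the wrapped phase
    k = np.round((ratio * np.asarray(phi_low) - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: true absolute high-frequency phase of 25 rad,
# frequency ratio 8, so the low-frequency phase is 25/8 rad.
true_phi = 25.0
phi_low = true_phi / 8
phi_high = np.angle(np.exp(1j * true_phi))   # wrapped into (-pi, pi]
recovered = unwrap_two_frequency(phi_low, phi_high, 1.0, 8.0)
```

The failure modes analyzed in the paper are visible in this formula: noise in phi_low scaled by the ratio can push k to the wrong integer (ambiguous unwrapping when f_high is too high), while noise in phi_high passes straight into the result (direct depth error when f_high is too low).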
Di Pietro, M; Schnider, A; Ptak, R
2011-10-01
Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling, but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A - a), almost all lowercase errors were letter substitutions (e.g., n - r). Analyses of the relationship between target letters and substitution errors showed that errors were influenced neither by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict the occurrence of either uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and are not - as previously found for uppercase letters - specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.
Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.
Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin
2017-02-01
The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
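The contrast between the two sampling scales is easy to see numerically. Below is an illustrative Python sketch of a single-pole Cole-Cole model with placeholder parameter values (not measured tissue data); the two grids span the same band, but the logarithmic grid concentrates points at the low-frequency end where dispersions often sit:

```python
import numpy as np

def cole_cole(f, eps_inf, d_eps, tau, alpha, sigma_s):
    """Single-pole Cole-Cole complex relative permittivity.
    Parameter values used below are illustrative placeholders."""
    eps0 = 8.854e-12                      # vacuum permittivity, F/m
    w = 2 * np.pi * f
    return (eps_inf
            + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
            + sigma_s / (1j * w * eps0))  # static-conductivity term

# Same band (100 MHz to 10 GHz), two sampling scales:
f_lin = np.linspace(1e8, 1e10, 101)
f_log = np.logspace(8, 10, 101)
eps = cole_cole(1e9, 4.0, 36.0, 7e-12, 0.1, 0.7)
```

On the linear grid only 5 of the 101 points fall below 500 MHz, versus 35 on the logarithmic grid, which illustrates why averaged fit metrics on a linear scale can obscure large localised low-frequency errors.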
Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J
2006-01-01
The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.
Palmero, David; Di Paolo, Ermindo R; Beauport, Lydie; Pannatier, André; Tolsa, Jean-François
2016-01-01
The objective of this study was to assess whether the introduction of a new preformatted medical order sheet coupled with an introductory course affected prescription quality and the frequency of errors during the prescription stage in a neonatal intensive care unit (NICU). This was a two-phase observational study consisting of two consecutive 4-month phases, pre-intervention (phase 0) and post-intervention (phase I), conducted in an 11-bed NICU in a Swiss university hospital. Interventions consisted of the introduction of a new preformatted medical order sheet with explicit information supplied, coupled with a staff introductory course on appropriate prescription and medication errors. The main outcomes measured were formal aspects of prescription and the frequency and nature of prescription errors. Eighty-three and 81 patients were included in phase 0 and phase I, respectively. A total of 505 handwritten prescriptions in phase 0 and 525 in phase I were analysed. The rate of prescription errors decreased significantly from 28.9% in phase 0 to 13.5% in phase I (p < 0.05). Compared with phase 0, dose errors, name confusion and errors in frequency and rate of drug administration decreased in phase I, from 5.4 to 2.7% (p < 0.05), 5.9 to 0.2% (p < 0.05), 3.6 to 0.2% (p < 0.05), and 4.7 to 2.1% (p < 0.05), respectively. The rate of incomplete and ambiguous prescriptions decreased from 44.2 to 25.7 and 8.5 to 3.2% (p < 0.05), respectively. Inexpensive and simple interventions can improve the intelligibility of prescriptions and reduce medication errors. Medication errors are frequent in NICUs, and prescription is one of the most critical steps. Computerised physician order entry (CPOE) systems reduce prescription errors, but they are not available everywhere. A preformatted medical order sheet coupled with an introductory course decreases medication errors in a NICU and is an inexpensive and readily implemented alternative to CPOE.
Land surface dynamics monitoring using microwave passive satellite sensors
NASA Astrophysics Data System (ADS)
Guijarro, Lizbeth Noemi
Soil moisture, surface temperature and vegetation are variables that play an important role in our environment. There is a growing demand for accurate estimates of these geophysical parameters for global climate models (GCMs), weather, hydrological and flooding models, and for applications such as agricultural assessment, land cover change, and a wide variety of other environmental uses. The different studies covered in this dissertation evaluate the capabilities and limitations of passive microwave sensors to monitor land surface dynamics. The first study evaluates the 19 GHz channel of the SSM/I instrument with a radiative transfer model and in situ datasets from the Illinois stations and the Oklahoma Mesonet to retrieve land surface temperature and surface soil moisture. The surface temperatures were retrieved with an average error of 5 K and the soil moisture with an average error of 6%. The results show that the 19 GHz channel can be used to qualitatively predict the spatial and temporal variability of surface soil moisture and surface temperature at regional scales. In the second study, in situ observations were compared with sensor observations to evaluate aspects of low and high spatial resolution at multiple frequencies with data collected from the Southern Great Plains Experiment (SGP99). The results showed that the sensitivity to soil moisture at each frequency is a function of wavelength and amount of vegetation. The results confirmed that L-band is best suited for soil moisture retrieval, but each sensor can provide soil moisture information if the vegetation water content is low. The spatial variability of the emissivities reveals that resolution suffers considerably at higher frequencies. The third study evaluates the C- and X-bands of the AMSR-E instrument.
In situ datasets from the Soil Moisture Experiments (SMEX03) in South Central Georgia were utilized to validate the AMSR-E soil moisture product and to derive surface soil moisture with a radiative transfer model. The soil moisture was retrieved with an average error of 2.7% at X-band and 6.7% at C-band. The AMSR-E demonstrated its ability to successfully infer soil moisture during the SMEX03 experiment.
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients follow from the derived equations and numerical results. The matrix of sensitivity coefficients seems diagonally dominant; however, this dominance does not always hold in the detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty of nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
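The idea of a relative sensitivity coefficient of importance to a one-group constant can be illustrated with a generic finite-difference stand-in for the perturbation-theory expressions derived in the text (the toy function and values below are hypothetical, not the paper's importance calculation):

```python
def sensitivity(f, x, i, rel=1e-6):
    """Relative sensitivity coefficient S_i = (x_i / f(x)) * df/dx_i,
    estimated by central differences. f is any scalar function of a
    parameter list x (here a stand-in for nuclide importance as a
    function of one-group cross sections)."""
    h = rel * x[i]
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    dfdx = (f(xp) - f(xm)) / (2 * h)   # central difference derivative
    return x[i] / f(x) * dfdx

# Toy model: "importance" proportional to x0**2 * x1, so S_0 = 2, S_1 = 1.
f = lambda x: x[0] ** 2 * x[1]
s0 = sensitivity(f, [2.0, 3.0], 0)
s1 = sensitivity(f, [2.0, 3.0], 1)
```

A power-law dependence gives constant coefficients, which is why such coefficients conveniently translate a relative cross-section error directly into a relative importance error.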
Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models
NASA Astrophysics Data System (ADS)
Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura
2014-09-01
Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for non-linear biological models. We then assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we nudge only in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
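A minimal sketch of the mean-band case, assuming a toy scalar model with a constant bias: the dynamics relax the state toward a biased value, and the nudging term acts only on a low-passed (mean-band) copy of the state, leaving faster variability alone. The model, gains, and time constants below are invented for illustration:

```python
def nudged_mean(n_steps=5000, dt=0.01, bias=2.0, gamma=4.0, tau=0.5,
                nudge=True):
    """Toy frequency dependent nudging: only the low-passed ("mean band")
    part of the state is nudged toward the observed mean (0 here)."""
    x = 0.0         # model state; dynamics are biased toward `bias`
    x_lp = 0.0      # running low-pass of the state (the mean band)
    obs_mean = 0.0  # observed climatological mean
    for _ in range(n_steps):
        x_lp += dt / tau * (x - x_lp)          # first-order low-pass filter
        tend = -(x - bias)                     # biased model tendency
        if nudge:
            tend -= gamma * (x_lp - obs_mean)  # nudge the mean band only
        x += dt * tend                         # forward Euler step
    return x
```

Without nudging the state settles at the bias (2.0); with mean-band nudging the steady state is pulled to bias/(1 + gamma) = 0.4, while a high-frequency signal added to the tendency would pass through the low-pass filter largely untouched.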
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Buhay, W.M.; Simpson, S.; Thorleifson, H.; Lewis, M.; King, J.; Telka, A.; Wilkinson, Philip M.; Babb, J.; Timsic, S.; Bailey, D.
2009-01-01
A short sediment core (162 cm), covering the period AD 920-1999, was sampled from the south basin of Lake Winnipeg for a suite of multi-proxy analyses leading towards a detailed characterisation of the recent millennial lake environment and hydroclimate of southern Manitoba, Canada. Information on the frequency and duration of major dry periods in southern Manitoba, in light of the changes that are likely to occur as a result of an increasingly warming atmosphere, is of specific interest in this study. Intervals of relatively enriched lake sediment cellulose oxygen isotope values (δ18Ocellulose) were found to occur from AD 1180 to 1230 (error range: AD 1104-1231 to 1160-1280), 1610-1640 (error range: AD 1571-1634 to 1603-1662), 1670-1720 (error range: AD 1643-1697 to 1692-1738) and 1750-1780 (error range: AD 1724-1766 to 1756-1794). Regional water balance, inferred from calculated Lake Winnipeg water oxygen isotope values (δ18Oinf-lw), suggests that the ratio of lake evaporation to catchment input may have been 25-40% higher during these isotopically distinct periods. Associated with the enriched δ18Ocellulose intervals are depleted carbon isotope values (δ13COM) linked to more abundantly preserved sediment organic matter. These suggest reduced microbial oxidation of terrestrially derived organic matter and/or subdued lake productivity during periods of minimised input of nutrients from the catchment area. With reference to other corroborating evidence, it is suggested that the AD 1180-1230, 1610-1640, 1670-1720 and 1750-1780 intervals represent four distinctly drier periods (droughts) in southern Manitoba, Canada. Additionally, dry periods of lower magnitude and duration may have also occurred from 1320 to 1340 (error range: AD 1257-1363), 1530-1540 (error range: AD 1490-1565 to 1498-1572) and 1570-1580 (error range: AD 1531-1599 to 1539-1606). © 2009 John Wiley & Sons, Ltd.
Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging
NASA Astrophysics Data System (ADS)
Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ
2015-01-01
Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies both as a function of time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full-Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full-Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).
An Ensemble Method for Spelling Correction in Consumer Health Questions
Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina
2015-01-01
Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared with 0.29 for an informed baseline using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant to spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
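The edit-distance-plus-frequency component can be sketched with the standard library; the contextual-similarity model, word-break handling, and punctuation correction from the paper's ensemble are omitted, and the tiny lexicon is invented:

```python
from difflib import get_close_matches

def correct(token, lexicon_freq, cutoff=0.75):
    """Pick orthographically close in-lexicon candidates, then break
    ties with corpus frequency (a sketch of the edit-distance +
    frequency idea only)."""
    if token in lexicon_freq:
        return token                      # already a known word
    candidates = get_close_matches(token, list(lexicon_freq),
                                   n=5, cutoff=cutoff)
    if not candidates:
        return token                      # nothing close enough; leave as-is
    # Among similarly spelled candidates, prefer the most frequent word.
    return max(candidates, key=lambda w: lexicon_freq[w])

freq = {"diabetes": 120, "diabetic": 60, "diet": 40}  # invented counts
```

`get_close_matches` ranks by `SequenceMatcher` ratio, a reasonable stand-in for a normalized edit distance; here "diabetis" matches both "diabetes" and "diabetic" equally well, and frequency breaks the tie.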
Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T
2016-03-01
In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using Differential Evolution (DE) algorithm employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that proposed TIDF controllers provide better dynamic response compared to PID controller in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered and the performance of TIDF controller in presence of TCSC is investigated. It is observed that system performance improves with the inclusion of TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters and load pattern. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
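The ITAE criterion that DE minimises here is straightforward to evaluate from a simulated error trace. Below is a generic Python sketch using trapezoidal integration; the two exponential "frequency deviation" traces are invented simply to show that faster settling yields a smaller ITAE:

```python
import numpy as np

def itae(t, e):
    """Integral of Time-multiplied Absolute Error (ITAE), the criterion
    minimised when tuning the TIDF gains (trapezoidal rule)."""
    y = t * np.abs(e)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

t = np.linspace(0.0, 10.0, 1001)
fast = np.exp(-2.0 * t)   # quickly settling frequency deviation
slow = np.exp(-0.5 * t)   # slowly settling deviation
```

For e(t) = exp(-a t) the analytic ITAE is close to 1/a², so the fast trace scores about 0.25 against 3.8 for the slow one; the time weighting is what makes ITAE penalise long settling times and late oscillations.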
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jungpyo; Wright, John; Bertelli, Nicola
In this study, a reduced model of quasilinear velocity diffusion by a small Larmor radius approximation is derived to couple Maxwell’s equations and the Fokker-Planck equation self-consistently for waves in the ion cyclotron range of frequencies in a tokamak. The reduced model preserves the important properties of the full Kennel-Engelmann diffusion model, such as the diffusion directions, the wave polarizations, and the H-theorem. The kinetic energy change (Wdot) is used to derive the reduced-model diffusion coefficients for the fundamental damping (n = 1) and the second harmonic damping (n = 2) to the lowest order of the finite Larmor radius expansion. The quasilinear diffusion coefficients are implemented in a coupled code (TORIC-CQL3D) with the equivalent reduced model of the dielectric tensor. We also present simulations of the ITER minority heating scenario, in which the reduced model is verified to agree with the full model results within allowable errors.
NASA Astrophysics Data System (ADS)
Ozrin, V. D.; Subbotin, M. V.; Nikitin, S. M.
2004-04-01
We have developed PLASS (Protein-Ligand Affinity Statistical Score), a pair-wise potential of mean-force for rapid estimation of the binding affinity of a ligand molecule to a protein active site. This scoring function is derived from the frequency of occurrence of atom-type pairs in crystallographic complexes taken from the Protein Data Bank (PDB). Statistical distributions are converted into distance-dependent contributions to the Gibbs free interaction energy for 10 atomic types using the Boltzmann hypothesis, with only one adjustable parameter. For a representative set of 72 protein-ligand structures, PLASS scores correlate well with the experimentally measured dissociation constants: a correlation coefficient R of 0.82 and RMS error of 2.0 kcal/mol. Such high accuracy results from our novel treatment of the volume correction term, which takes into account the inhomogeneous properties of the protein-ligand complexes. PLASS is able to rank reliably the affinity of complexes which have as much diversity as in the PDB.
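The Boltzmann-hypothesis step, converting pair-occurrence statistics into distance-dependent free-energy contributions, can be sketched as an inverse-Boltzmann inversion. The histograms and kT value below are illustrative, and the volume-correction treatment that the authors emphasise is omitted:

```python
import numpy as np

def pair_potential(counts, counts_ref, kT=0.593):
    """Knowledge-based pair potential per distance bin via the Boltzmann
    hypothesis: U(r) = -kT * ln(g_obs(r) / g_ref(r)).
    kT ~ 0.593 kcal/mol at 298 K; the histograms here are invented."""
    g = counts / counts.sum()              # observed pair distribution
    g_ref = counts_ref / counts_ref.sum()  # reference-state distribution
    return -kT * np.log(g / g_ref)

obs = np.array([2.0, 8.0, 20.0, 10.0])    # observed counts per distance bin
ref = np.array([10.0, 10.0, 10.0, 10.0])  # uniform reference counts
u = pair_potential(obs, ref)
```

Bins enriched relative to the reference (the third bin here) come out attractive (negative energy), depleted bins repulsive (positive), which is the sense in which occurrence statistics become an interaction free energy.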
Zhou, Yang; Utsunomiya, Yuri T; Xu, Lingyang; Hay, El Hamidi Abdel; Bickhart, Derek M; Sonstegard, Tad S; Van Tassell, Curtis P; Garcia, Jose Fernando; Liu, George E
2016-07-06
We compared CNV region (CNVR) results derived from 1,682 Nellore cattle with equivalent results derived from our previous analysis of Bovine HapMap samples. By comparing CNV segment frequencies between different genders and groups, we identified 9 frequent, false positive CNVRs with a total length of 0.8 Mbp that were likely caused by assembly errors. Although there was a paucity of lineage specific events, we did find one 54 kb deletion on chr5 significantly enriched in Nellore cattle. A few highly frequent CNVRs present in both datasets were detected within genomic regions containing olfactory receptor, ATP-binding cassette, and major histocompatibility complex genes. We further evaluated their impacts on downstream bioinformatics and CNV association analyses. Our results revealed pitfalls caused by false positive and lineage-differential copy number variations and will increase the accuracy of future CNV studies in both taurine and indicine cattle.
Month-to-Month and Year-to-Year Reproducibility of High Frequency QRS ECG signals
NASA Technical Reports Server (NTRS)
Batdorf, Niles; Feiveson, Alan H.; Schlegel, Todd T.
2006-01-01
High frequency (HF) electrocardiography analyzing the entire QRS complex in the frequency range of 150 to 250 Hz may prove useful in the detection of coronary artery disease, yet the long-term stability of these waveforms has not been fully characterized. We therefore prospectively investigated the reproducibility of the root mean squared (RMS) voltage, kurtosis, and the presence versus absence of reduced amplitude zones (RAzs) in signal averaged 12-lead HF QRS recordings acquired in the supine position one month apart in 16 subjects and one year apart in 27 subjects. Reproducibility of RMS voltage and kurtosis was excellent over these time intervals in the limb leads, and acceptable in the precordial leads using both the V-lead and CR-lead derivations. The relative error of RMS voltage was 12% month-to-month and 16% year-to-year in the serial recordings when averaged over all 12 leads. RAzs were also reproducible at a rate of up to 87% and 81%, respectively, for the month-to-month and year-to-year recordings. We conclude that 12-lead HF QRS electrocardiograms are sufficiently reproducible for clinical use.
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-01
When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
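For a flavour of interval estimation for a ratio of Poisson mean frequencies, here is a textbook Wald interval on the log scale. This is a generic sketch, not the paper's WLS or Mantel-Haenszel procedures, and it ignores the crossover structure; the event counts are invented:

```python
import math

def poisson_ratio_ci(x1, t1, x2, t2, z=1.96):
    """Wald 95% CI for the ratio of two Poisson mean rates (x events
    over exposure t per arm), built on the log scale and exponentiated."""
    log_ratio = math.log(x1 / t1) - math.log(x2 / t2)
    se = math.sqrt(1.0 / x1 + 1.0 / x2)   # SE of the log rate ratio
    return (math.exp(log_ratio - z * se),
            math.exp(log_ratio + z * se))

# Invented counts: 20 exacerbations over 100 patient-periods vs 40 over 100.
lo, hi = poisson_ratio_ci(20, 100.0, 40, 100.0)
```

Here the interval around the rate ratio of 0.5 excludes 1, so the two arms' exacerbation rates would differ at the 5% level in this toy example.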
Finite-difference time-domain synthesis of infrasound propagation through an absorbing atmosphere.
de Groot-Hedlin, C
2008-09-01
Equations applicable to finite-difference time-domain (FDTD) computation of infrasound propagation through an absorbing atmosphere are derived and examined in this paper. It is shown that over altitudes up to 160 km, and at frequencies relevant to global infrasound propagation, i.e., 0.02-5 Hz, the acoustic absorption in dB/m varies approximately as the square of the propagation frequency plus a small constant term. A second-order differential equation is presented for an atmosphere modeled as a compressible Newtonian fluid with low shear viscosity, acted on by a small external damping force. It is shown that the solution to this equation represents pressure fluctuations with the attenuation indicated above. Increased dispersion is predicted at altitudes over 100 km at infrasound frequencies. The governing propagation equation is separated into two partial differential equations that are first order in time for FDTD implementation. A numerical analysis of errors inherent to this FDTD method shows that the attenuation term imposes additional stability constraints on the FDTD algorithm. Comparison of FDTD results for models with and without attenuation shows that the predicted transmission losses for the attenuating media agree with those computed from synthesized waveforms.
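The stated frequency dependence, absorption in dB/m varying roughly as frequency squared plus a small constant, is easy to sketch and recover by fitting. The coefficients below are placeholders, not the atmospheric values derived in the paper:

```python
import numpy as np

def alpha(f, a=1e-4, c=1e-6):
    """Toy absorption model alpha(f) = a*f**2 + c in dB/m;
    the coefficients are illustrative placeholders."""
    return a * f ** 2 + c

freqs = np.linspace(0.02, 5.0, 50)           # Hz, global infrasound band
coeffs = np.polyfit(freqs, alpha(freqs), 2)  # recovers (a, ~0, c)
```

The quadratic fit returns the generating coefficients with a vanishing linear term, mirroring the functional form the paper reports over 0.02-5 Hz.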
Mitigating leakage errors due to cavity modes in a superconducting quantum computer
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.
2018-07-01
A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
Novel parametric reduced order model for aeroengine blade dynamics
NASA Astrophysics Data System (ADS)
Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis
2015-10-01
The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are constituted by linear combinations of relative errors associated with the resonance frequencies, the individual modal assurance criteria (MAC), and either the overall static or the modal masses. When static masses are considered, the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When modal masses are used in the cost function, the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high fidelity finite element models when the material parameters are used in the sensitivity analysis.
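The modal assurance criterion used in the cost function is worth spelling out. A generic sketch for real mode-shape vectors (the vectors below are invented):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors:
    MAC = (phi_a . phi_b)^2 / ((phi_a . phi_a) * (phi_b . phi_b)).
    1 means identical shapes up to scale; 0 means orthogonal shapes."""
    return (np.dot(phi_a, phi_b) ** 2
            / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b)))

phi = np.array([1.0, 2.0, 3.0])
same = mac(phi, 2.0 * phi)                              # scale-invariant
orth = mac(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # orthogonal shapes
```

Because MAC is scale-invariant, it isolates shape agreement from amplitude, which is why the cost function pairs it with separate frequency and mass error terms.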
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
Broadband CARS spectral phase retrieval using a time-domain Kramers–Kronig transform
Liu, Yuexin; Lee, Young Jong; Cicerone, Marcus T.
2014-01-01
We describe a closed-form approach for performing a Kramers–Kronig (KK) transform that can be used to rapidly and reliably retrieve the phase, and thus the resonant imaginary component, from a broadband coherent anti-Stokes Raman scattering (CARS) spectrum with a nonflat background. In this approach we transform the frequency-domain data to the time domain, perform an operation that ensures a causality criterion is met, then transform back to the frequency domain. The fact that this method handles causality in the time domain allows us to conveniently account for spectrally varying nonresonant background from CARS as a response function with a finite rise time. A phase error accompanies the KK transform of data with a finite frequency range. In examples shown here, that phase error leads to small (<1%) errors in the retrieved resonant spectra. PMID:19412273
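The frequency → time → frequency round trip with a causality window can be sketched in a few lines of NumPy. This is a generic minimum-phase (time-domain Hilbert/KK) sketch, not the paper's method: the nonresonant-background treatment and the finite-window phase error are not reproduced, and the sign of the returned phase depends on the susceptibility convention.

```python
import numpy as np

def kk_phase(intensity):
    """Minimum-phase retrieval from a magnitude spectrum via a
    time-domain Kramers-Kronig (Hilbert) transform."""
    n = len(intensity)
    log_amp = 0.5 * np.log(intensity)   # ln of the spectral amplitude
    t = np.fft.fft(log_amp)             # to the conjugate (time) domain
    h = np.zeros(n)                     # causality window: keep t >= 0
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(t * h)       # back to the frequency domain
    return np.imag(analytic)            # Hilbert transform of log_amp
```

Because the causality window turns the log-amplitude into an analytic signal, the imaginary part of the result is exactly the (periodic) Hilbert transform of the log-amplitude, which is the phase of a minimum-phase response.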
A new method for weakening the combined effect of residual errors on multibeam bathymetric data
NASA Astrophysics Data System (ADS)
Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue
2014-12-01
The multibeam bathymetric system (MBS) has been widely applied in marine surveying to provide high-resolution seabed topography. However, several factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude and the misalignment angle of the transducer. Although these factors are strictly corrected in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken it using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps are involved in the method: separating the low-frequency and high-frequency parts of the bathymetric data, reconstructing the trend of the actual seabed topography, merging the actual trend with the extracted microtopography, and evaluating the accuracy. Experimental results prove that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
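The first separation step can be illustrated on a 1-D depth profile with an FFT low-pass filter. This is a minimal sketch under invented assumptions: the cutoff value and the 1-D setting are illustrative, not the paper's 2-D procedure.

```python
import numpy as np

def split_bathymetry(depths, cutoff=0.05):
    """Split a 1-D depth profile into a low-frequency trend and
    high-frequency microtopography with an FFT low-pass filter."""
    n = len(depths)
    spec = np.fft.rfft(depths)
    freqs = np.fft.rfftfreq(n)          # cycles per sample
    low = spec * (freqs <= cutoff)      # keep only the low frequencies
    trend = np.fft.irfft(low, n)        # smooth topographic trend
    micro = depths - trend              # residual microtopography
    return trend, micro
```

A corrected trend (e.g. from a reference survey) could then be merged with the extracted microtopography by simple addition, mirroring the third step above.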
Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.
ERIC Educational Resources Information Center
Monagle, E. Brette
The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First, an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
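The noting/classifying/counting step described above amounts to building a frequency table. A minimal sketch, in which the error categories and page numbers are invented for illustration:

```python
from collections import Counter

# hypothetical error log: (page, category) pairs noted while editing
errors = [
    (1, "comma splice"), (1, "subject-verb agreement"), (2, "comma splice"),
    (3, "dangling modifier"), (3, "comma splice"), (4, "passive voice"),
]

# frequency count of error categories, most common first
counts = Counter(category for _, category in errors)
for category, n in counts.most_common():
    print(f"{category}: {n}")
```

The most frequent categories are the writer's "error pattern" and tell the editor where corrective feedback will pay off most.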
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramer-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
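The curvature-matrix inversion can be sketched numerically for the four-parameter model: a Gaussian of amplitude `amp`, center `cen` and width `wid` on a constant background `bkg`, with noise standard deviation `sigma`. This is a generic numerical sketch, not the paper's closed-form expressions.

```python
import numpy as np

def gaussian_crb(x, amp, cen, wid, bkg, sigma):
    """Cramer-Rao lower bounds on (amp, cen, wid, bkg) of a Gaussian
    line on a constant background, for white noise of std sigma."""
    g = np.exp(-(x - cen) ** 2 / (2.0 * wid ** 2))
    # Jacobian of the model with respect to the four parameters
    J = np.column_stack([
        g,                                    # d model / d amp
        amp * g * (x - cen) / wid ** 2,       # d model / d cen
        amp * g * (x - cen) ** 2 / wid ** 3,  # d model / d wid
        np.ones_like(x),                      # d model / d bkg
    ])
    fisher = J.T @ J / sigma ** 2             # curvature (Fisher) matrix
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```

As the abstract notes, the bounds set the minimum achievable error for a given signal-to-noise ratio; in this sketch they scale linearly with the noise level, as expected.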
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. 
The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size distribution both within a given rain event and across different varieties of rain events. Index Terms: drop size distribution, frequency scaling, propagation losses, radiowave propagation.
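As a toy illustration of frequency scaling, specific rain attenuation is often modeled with a power law of the form γ = kR^α (the ITU-R P.838 form). The coefficients below are placeholders chosen for illustration, not the disdrometer-derived values used in the study.

```python
# Hypothetical power-law coefficients (gamma = k * R**alpha, dB/km);
# these are illustrative stand-ins, not ITU-R or study values.
K_20, ALPHA_20 = 0.075, 1.10   # 20 GHz
K_40, ALPHA_40 = 0.350, 0.93   # 40 GHz

def scale_attenuation(att_20, rain_rate):
    """Scale a measured 20 GHz attenuation (dB) to a predicted 40 GHz
    attenuation via the ratio of power-law specific attenuations."""
    factor = (K_40 * rain_rate ** ALPHA_40) / (K_20 * rain_rate ** ALPHA_20)
    return att_20 * factor

predicted_40 = scale_attenuation(3.0, 10.0)
```

Because the exponents differ, the scaling factor itself depends on the rain rate, which is exactly the kind of correlation the paper sets out to quantify.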
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on the incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific chemotherapy prescribing forms were developed and implemented over a 32-month period. Outcome measures were the trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), the immediate change in the error rate from pre-implementation to implementation, and the trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. The relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with a 95% confidence interval (CI). The baseline monthly error rate was stable at 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed upon initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74 (95% CI: 0.59-0.92). The estimated RR of dosing calculation errors was 0.06 (95% CI: 0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51 (95% CI: 0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
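The segmented-regression design behind these estimates can be sketched as an ordinary least-squares fit of a level change and a slope change at the intervention point. The series below is fabricated from the reported rates (16.7 baseline, a 30% level drop, slope -0.338) purely for illustration.

```python
import numpy as np

def segmented_fit(t, y, t0):
    """Interrupted time-series fit: baseline level and trend, plus a
    level change and slope change at intervention time t0."""
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [level, pre-slope, level change, slope change]

# synthetic monthly error rates mimicking the reported pattern
t = np.arange(58.0)
y = 16.7 + 0.0 * t                                   # flat baseline
y[t >= 30] = 16.7 * 0.7 - 0.338 * (t[t >= 30] - 30)  # drop + new slope
coef = segmented_fit(t, y, 30.0)
```

On noise-free data the fit recovers the immediate level change and the post-intervention slope change exactly; with real monthly counts, confidence intervals would come from the regression standard errors.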
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. A retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant' and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' error was the most common sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence. Computed tomography reports contained 0.040 errors per sentence compared with 0.030 for plain film. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
Excitations for Rapidly Estimating Flight-Control Parameters
NASA Technical Reports Server (NTRS)
Moes, Tim; Smith, Mark; Morelli, Gene
2006-01-01
A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. An IFCS, in contrast, must be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. 
Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the derivatives.
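The core idea, exciting several surfaces simultaneously at distinct frequencies and regressing the measured rates onto states and inputs, can be sketched on a toy one-degree-of-freedom model. The dynamics, the coefficient values and the plain time-domain least-squares estimator below are illustrative stand-ins for the pEst and FTR tools named above.

```python
import numpy as np

dt, n = 0.01, 1500
t = np.arange(n) * dt
# two "control surfaces" excited simultaneously at distinct frequencies
u1 = 0.5 * np.sin(2.0 * np.pi * 1.0 * t)
u2 = 0.5 * np.sin(2.0 * np.pi * 1.7 * t)

# toy short-period model: qdot = Mq*q + Md1*u1 + Md2*u2
Mq, Md1, Md2 = -1.2, 4.0, -2.5
q = np.zeros(n)
for k in range(n - 1):                       # forward-Euler simulation
    q[k + 1] = q[k] + dt * (Mq * q[k] + Md1 * u1[k] + Md2 * u2[k])

# equation-error regression of the rate onto [state, input1, input2]
qdot = np.diff(q) / dt
X = np.column_stack([q[:-1], u1[:-1], u2[:-1]])
est, *_ = np.linalg.lstsq(X, qdot, rcond=None)
```

Because the two inputs sit at different frequencies, their contributions to the response are separable and the regression recovers all three derivatives from a single maneuver, which is the property PreSISE exploits.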
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sudeep; Louis, Thibaut; Calabrese, Erminia
2014-04-01
We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.
NASA Technical Reports Server (NTRS)
Das, Sudeep; Louis, Thibaut; Nolta, Michael R.; Addison, Graeme E.; Battistelli, Elia S.; Bond, J. Richard; Calabrese, Erminia; Crichton, Devin; Devlin, Mark J.; Dicker, Simon;
2014-01-01
We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase in the difference between the smoothed and original models. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times to be at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets to be at the verge of applicability even in models sufficiently smoothed for ray tracing.
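Minimizing a Sobolev norm of slowness amounts to a penalized least-squares problem: keep the smoothed slowness close to the original while penalizing its second derivative. A 1-D sketch follows; the actual work is 2-D/3-D with a specific choice of Sobolev weights, which this toy (with a single penalty weight `lam`) omits.

```python
import numpy as np

def sobolev_smooth(s0, lam):
    """Smooth a 1-D slowness profile by minimizing
    |s - s0|^2 + lam * |D2 s|^2, i.e. solve (I + lam*D2'D2) s = s0."""
    n = len(s0)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second derivative
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, s0)
```

Larger `lam` gives a smoother model at the price of a larger misfit to the original slowness, which is exactly the trade-off between ray-tracing suitability and model (and travel-time) error discussed above.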
Impact of lateral boundary conditions on regional analyses
NASA Astrophysics Data System (ADS)
Chikhar, Kamel; Gauthier, Pierre
2017-04-01
Regional and global climate models are usually validated by comparison to derived observations or reanalyses. Using a model in data assimilation allows a direct comparison to observations, as the model produces its own analyses, which may reveal systematic errors. In this study, regional analyses over North America are produced based on the fifth-generation Canadian Regional Climate Model (CRCM5) combined with the variational data assimilation system of the Meteorological Service of Canada (MSC). CRCM5 is driven at its boundaries by global analyses from ERA-Interim or produced with the global configuration of the CRCM5. Assimilation cycles for the months of January and July 2011 revealed systematic errors in winter through large values in the mean analysis increments. This bias is attributed to the coupling of the lateral boundary conditions of the regional model with the driving data, particularly over the northern boundary, where a rapidly changing large-scale circulation created significant cross-boundary flows. Increasing the time frequency of the lateral driving and applying large-scale spectral nudging significantly improved the circulation through the lateral boundaries, which translated into much better agreement with observations.
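Large-scale spectral nudging keeps the regional model's long waves close to the driving analyses while leaving the small scales free to develop. A 1-D sketch of the idea follows; the mode cutoff and relaxation coefficient are illustrative assumptions, and real implementations nudge 2-D fields over a relaxation time scale.

```python
import numpy as np

def spectral_nudge(field, driver, n_keep, alpha):
    """Relax the largest n_keep Fourier modes of a 1-D regional field
    toward the driving field, leaving smaller scales untouched."""
    f_spec = np.fft.rfft(field)
    d_spec = np.fft.rfft(driver)
    nudged = f_spec.copy()
    # blend only the large-scale (low-wavenumber) modes
    nudged[:n_keep] = (1.0 - alpha) * f_spec[:n_keep] + alpha * d_spec[:n_keep]
    return np.fft.irfft(nudged, len(field))
```

With `alpha = 1` the long waves are replaced outright by the driver's, while the regional model's fine-scale structure is preserved.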
Effects of Time-Dependent Inflow Perturbations on Turbulent Flow in a Street Canyon
NASA Astrophysics Data System (ADS)
Duan, G.; Ngan, K.
2017-12-01
Urban flow and turbulence are driven by atmospheric flows with larger horizontal scales. Since building-resolving computational fluid dynamics models typically employ steady Dirichlet boundary conditions or forcing, the accuracy of numerical simulations may be limited by the neglect of perturbations. We investigate the sensitivity of flow within a unit-aspect-ratio street canyon to time-dependent perturbations near the inflow boundary. Using large-eddy simulation, time-periodic perturbations to the streamwise velocity component are incorporated via the nudging technique. Spatial averages of pointwise differences between unperturbed and perturbed velocity fields (i.e., the error kinetic energy) show a clear dependence on the perturbation period, though spatial structures are largely insensitive to the time-dependent forcing. The response of the error kinetic energy is maximized for perturbation periods comparable to the time scale of the mean canyon circulation. Frequency spectra indicate that this behaviour arises from a resonance between the inflow forcing and the mean motion around closed streamlines. The robustness of the results is confirmed using perturbations derived from measurements of roof-level wind speed.
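The error kinetic energy diagnostic used here, a spatial average of squared pointwise differences between the unperturbed and perturbed velocity fields, is straightforward to compute. A sketch for fields stored as time × space arrays (the array layout is an assumption):

```python
import numpy as np

def error_kinetic_energy(u_ref, u_pert):
    """Spatially averaged error kinetic energy between unperturbed and
    perturbed velocity fields; axis 0 is time, remaining axes space."""
    diff = u_ref - u_pert
    # 0.5 * <|u - u'|^2>, averaged over all spatial axes per time step
    return 0.5 * np.mean(diff ** 2, axis=tuple(range(1, diff.ndim)))
```

Plotting this time series for perturbations of different periods would reveal the resonant response described above, peaking when the forcing period matches the mean canyon circulation time scale.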