Sample records for minimum phase error

  1. Adaptive feedforward control of non-minimum phase structural systems

    NASA Astrophysics Data System (ADS)

    Vipperman, J. S.; Burdisso, R. A.

    1995-06-01

    Adaptive feedforward control algorithms have been effectively applied to stationary disturbance rejection. For structural systems, the ideal feedforward compensator is a recursive filter which is a function of the transfer functions between the disturbance and control inputs and the error sensor output. Unfortunately, most control configurations result in a non-minimum phase control path; even a collocated control actuator and error sensor will not necessarily produce a minimum phase control path in the discrete domain. Therefore, the common practice is to choose a suitable approximation of the ideal compensator. In particular, all-zero finite impulse response (FIR) filters are desirable because of their inherent stability for adaptive control approaches. However, for highly resonant systems, large order filters are required for broadband applications. In this work, a control configuration is investigated for controlling non-minimum phase lightly damped structural systems. The control approach uses low order FIR filters as feedforward compensators in a configuration that has one more control actuator than error sensors. The performance of the controller was experimentally evaluated on a simply supported plate under white noise excitation for a two-input, one-output (2I1O) system. The results show excellent error signal reduction, attesting to the effectiveness of the method.
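The abstract does not state the adaptation law, but low-order FIR feedforward compensators of this kind are typically adapted with a filtered-x LMS update. Below is a minimal single-channel sketch, assuming a known FIR model of the control (secondary) path; the tap count, step size, and path model are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fxlms(x, d, s, n_taps=16, mu=1e-3):
    """Minimal filtered-x LMS sketch: adapt FIR weights w so that the
    control signal, passed through the secondary-path FIR model s,
    cancels the disturbance d seen at the error sensor."""
    w = np.zeros(n_taps)                  # FIR compensator weights
    y_hist = np.zeros(len(s))             # recent control outputs, newest first
    xf = np.convolve(x, s)[:len(x)]       # reference filtered by the path model
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        xbuf = x[n - n_taps:n][::-1]      # reference tap vector, newest first
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = w @ xbuf              # new control output
        e[n] = d[n] + s @ y_hist          # residual at the error sensor
        w -= mu * e[n] * xf[n - n_taps:n][::-1]   # LMS weight update
    return w, e
```

For a lightly damped, non-minimum phase control path, the impulse response of s has long resonant tails, which is exactly why the paper argues that large FIR orders are needed in broadband applications.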

  2. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

With the goal of designing interferometers and interferometer sensors, e.g., LADARs, with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase-estimate error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  3. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of the phase error variance and the phase error probability density function (PDF). Monte Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation results.

  4. Analysis of the PLL phase error in presence of simulated ionospheric scintillation events

    NASA Astrophysics Data System (ADS)

    Forte, B.

    2012-01-01

The functioning of standard phase-locked loops (PLLs), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in the presence of small phase errors. Such an approximation is reasonable in most propagation channels. However, in the presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded in a commercial multiconstellation GNSS receiver were analyzed in the presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations, although cycle slips only occurred during signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1 C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in the presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL, and the accuracy of the observables (i.e., the error propagation onto the observables stage).
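To make the linear-versus-nonlinear loop regime concrete, here is a toy first-order loop with a sinusoidal phase detector next to its linearized version; the loop gain and the scintillation-like phase step are illustrative assumptions, not a model of a real GNSS tracking channel:

```python
import numpy as np

def pll_phase_error(theta_in, k=0.1, linear=False):
    """First-order loop recursion.  The detector output is sin(phi);
    the linear design approximates sin(phi) ~ phi, which only holds
    for small phase errors phi."""
    phi = np.zeros(len(theta_in))
    theta_hat = 0.0                            # loop's running phase estimate
    for n in range(len(theta_in)):
        err = theta_in[n] - theta_hat
        det = err if linear else np.sin(err)   # phase-detector output
        theta_hat += k * det                   # loop correction
        phi[n] = err
    return phi

# A scintillation-like phase step: 0.2 rad keeps both loops in agreement,
# while a step near pi radians stalls the nonlinear loop (sin(phi) ~ 0),
# illustrating the breakdown of the linear approximation.
t = np.arange(500)
small_step = pll_phase_error(np.where(t < 250, 0.0, 0.2))
large_step = pll_phase_error(np.where(t < 250, 0.0, 3.0))
```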

  5. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ˜1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing the satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position, that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation, as new satellites are launched, age, and are decommissioned, leads to time-varying effects of PCO errors that introduce an apparent global-scale rate change. We demonstrate here that current estimates of PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr-1 level, which will impact high-precision crustal deformation studies.

  6. Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors

    NASA Technical Reports Server (NTRS)

    Cardellach, E.; Elosequi, P.; Davis, J. L.

    2007-01-01

Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approximately 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing the satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position, that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation, as new satellites are launched, age, and are decommissioned, leads to time-varying effects of PCO errors that introduce an apparent global-scale rate change. We demonstrate here that current estimates of PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  7. On higher order discrete phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Gill, G. S.; Gupta, S. C.

    1972-01-01

    An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.

  8. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
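The uniform white-sequence quantizer model admits a compact worked example: treating the quantizer error as white noise with variance equal to the squared step size divided by 12, and folding it into the input noise, gives an effective SNR. The mid-rise step-size formula and unit signal power below are textbook assumptions, not the paper's exact definitions:

```python
import numpy as np

def effective_snr_db(input_snr_db, full_scale, n_bits):
    """Combine input-noise power with the uniform quantization-noise
    model (variance = step**2 / 12) into a single effective SNR."""
    step = 2.0 * full_scale / 2**n_bits       # uniform quantizer step size
    q_var = step**2 / 12.0                    # white quantizer-noise variance
    sig_var = 1.0                             # unit signal power (assumption)
    in_var = sig_var / 10**(input_snr_db / 10.0)
    return 10.0 * np.log10(sig_var / (in_var + q_var))

# With coarse quantization the effective SNR saturates below the input
# SNR; finer quantization buys little once q_var << in_var, which is why
# infinitely-fine-quantization results can predict quantized performance.
print(effective_snr_db(20.0, full_scale=3.0, n_bits=4))
```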

  9. Robustification and Optimization in Repetitive Control For Minimum Phase and Non-Minimum Phase Systems

    NASA Astrophysics Data System (ADS)

    Prasitmeeboon, Pitcha

Repetitive control (RC) is a control method that specifically aims to drive to zero the tracking error of control systems that execute a periodic command or have periodic disturbances of known period. It uses the error from one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degrees of model phase error at all frequencies up to Nyquist. A zero-phase cutoff filter is normally used to robustify against high-frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods that use data to make real-time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and the use of a real-time projection algorithm from adaptive control for each frequency. The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. A Min-Max cost function over the learning rate is presented, which can be reformulated as a quadratically constrained linear programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although it was illustrated that using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to a chosen frequency interval and applying an FIR zero-phase low-pass filter to cut off learning for frequencies above that interval.
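As a minimal sketch of the core mechanism described above — last period's error adjusts this period's command, with a zero-phase low-pass filter robustifying against high-frequency model error — consider the following; the learning gain, filter order, and cutoff are illustrative assumptions, not values from the dissertation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rc_update(u_prev, e_prev, learn_gain=0.5, cutoff=0.2):
    """One repetitive-control iteration: u_next = u_prev + F(L * e_prev),
    where F is a zero-phase low-pass filter (filtfilt runs the filter
    forward and backward, so it adds no phase lag).  Assumes the period
    is at least a few dozen samples long."""
    b, a = butter(4, cutoff)                # robustifying low-pass filter
    correction = filtfilt(b, a, learn_gain * e_prev)
    return u_prev + correction
```

Frequencies above the cutoff are left uncorrected, which is precisely the trade-off the dissertation attacks by updating the frequency-response model in real time and raising the cutoff.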

  10. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome informs the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss a possible generalization of the homodyne detection scheme.

  11. Velocity encoding with the slice select refocusing gradient for faster imaging and reduced chemical shift-induced phase errors.

    PubMed

    Middione, Matthew J; Thompson, Richard B; Ennis, Daniel B

    2014-06-01

To investigate a novel phase-contrast MRI velocity-encoding technique for faster imaging and reduced chemical shift-induced phase errors. Velocity encoding with the slice select refocusing gradient achieves the target gradient moment by time shifting the refocusing gradient, which enables the use of the minimum in-phase echo time (TE) for faster imaging and reduced chemical shift-induced phase errors. Net forward flow was compared in 10 healthy subjects (N = 10) within the ascending aorta (aAo), main pulmonary artery (PA), and right/left pulmonary arteries (RPA/LPA) using conventional flow compensated and flow encoded (401 Hz/px and TE = 3.08 ms) and slice select refocused gradient velocity encoding (814 Hz/px and TE = 2.46 ms) at 3 T. Improved net forward flow agreement was measured across all vessels for slice select refocused gradient compared to flow compensated and flow encoded: aAo vs. PA (1.7% ± 1.9% vs. 5.8% ± 2.8%, P = 0.002), aAo vs. RPA + LPA (2.1% ± 1.7% vs. 6.0% ± 4.3%, P = 0.03), and PA vs. RPA + LPA (2.9% ± 2.1% vs. 6.1% ± 6.3%, P = 0.04), while increasing temporal resolution (35%) and signal-to-noise ratio (33%). Slice select refocused gradient phase-contrast MRI with a high receiver bandwidth and minimum in-phase TE provides more accurate and less variable flow measurements through the reduction of chemical shift-induced phase errors and a reduced TE/repetition time, which can be used to increase the temporal/spatial resolution and/or reduce breath hold durations.

  12. Neural self-tuning adaptive control of non-minimum phase system

    NASA Technical Reports Server (NTRS)

    Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.

    1993-01-01

The motivation for this research arose when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit very high sensitivity, if not unstable closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm was developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.

  13. Determination of vigabatrin in plasma by reversed-phase high-performance liquid chromatography.

    PubMed

    Tsanaclis, L M; Wicks, J; Williams, J; Richens, A

    1991-05-01

A method is described for the determination of vigabatrin in 50 microliters of plasma by isocratic high-performance liquid chromatography using fluorescence detection. The procedure involves protein precipitation with methanol followed by precolumn derivatisation with o-phthaldialdehyde reagent. Separation of the derivatised vigabatrin was achieved on a Microsorb C18 column using a mobile phase of 10 mM orthophosphoric acid:acetonitrile:methanol (6:3:1) at a flow rate of 2.0 ml/min. Assay time is 15 min, and chromatograms show no interference from commonly coadministered anticonvulsant drugs. The total analytical error within the range of 0.85-85 micrograms/ml was found to be 7.6%, with a within-replicates error of 2.76%. The minimum detection limit was 0.08 micrograms/ml and the minimum quantitation limit was 0.54 micrograms/ml.

  14. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    PubMed

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

The master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework has been shown to be powerful for modeling kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species, multiple-channel potential energy surface (PES) over a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
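The fitting step is not spelled out in the abstract, but its flavor can be sketched as a least-squares fit of an assumed macroscopic rate law to an ME-derived species profile; the single-channel first-order form below is an illustrative stand-in for the multi-species, multi-channel fits GMPE actually performs:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_rate_coefficient(t, profile):
    """Extract a phenomenological rate coefficient k by minimizing the
    global error between exp(-k t) and a time-resolved species profile."""
    residual = lambda k: np.exp(-k[0] * t) - profile
    return least_squares(residual, x0=[1.0], bounds=(0.0, np.inf)).x[0]

# Recover k = 2.5 from a synthetic noiseless first-order decay profile.
t = np.linspace(0.0, 2.0, 200)
print(fit_rate_coefficient(t, np.exp(-2.5 * t)))   # ~2.5
```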

  15. Multiphase computer-generated holograms for full-color image generation

    NASA Astrophysics Data System (ADS)

    Choi, Kyong S.; Choi, Byong S.; Choi, Yoon S.; Kim, Sun I.; Kim, Jong Man; Kim, Nam; Gil, Sang K.

    2002-06-01

Multi-phase and binary-phase computer-generated holograms were designed and demonstrated for full-color image generation. To optimize a phase profile of the hologram that achieves each color image, we employed a simulated annealing method. The designed binary-phase hologram had a diffraction efficiency of 33.23 percent and a reconstruction error of 0.367 x 10-2, and the eight-phase hologram had a diffraction efficiency of 67.92 percent and a reconstruction error of 0.273 x 10-2. The designed BPH was fabricated by a micro-photolithographic technique with a minimum pixel width of 5 micrometers, and it was reconstructed using two Ar-ion lasers and a He-Ne laser. In addition, the color dispersion characteristic of the fabricated grating and the scaling problem of the reconstructed image are discussed.
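A toy version of the simulated-annealing design loop for a discrete-level phase hologram might look as follows; the far-field cost (FFT amplitude versus a target image), cooling schedule, and all constants are illustrative assumptions:

```python
import numpy as np

def anneal_phase_hologram(target, levels=8, iters=20000, t0=1.0, alpha=0.9995):
    """Metropolis simulated annealing over discrete phase levels: propose
    a random phase change at a random pixel and accept it based on the
    change in far-field reconstruction error.  `target` is the desired
    normalized far-field amplitude (same shape as the hologram)."""
    rng = np.random.default_rng(0)
    n = target.shape[0]
    phase = rng.integers(0, levels, size=(n, n))

    def cost(p):
        field = np.exp(2j * np.pi * p / levels)     # phase-only hologram
        recon = np.abs(np.fft.fft2(field)) / n      # far-field amplitude
        return np.sum((recon - target) ** 2)        # reconstruction error

    c, temp = cost(phase), t0
    for _ in range(iters):
        i, j = rng.integers(0, n, size=2)
        old = phase[i, j]
        phase[i, j] = rng.integers(0, levels)       # propose a new level
        c_new = cost(phase)
        if c_new > c and rng.random() > np.exp((c - c_new) / temp):
            phase[i, j] = old                       # reject uphill move
        else:
            c = c_new                               # accept
        temp *= alpha                               # cool down
    return phase
```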

16. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  17. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high-burst-rate, serial minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  18. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

An approach to improving orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency at the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
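The network principle — each interferogram constrains only the difference of per-image orbit-error parameters, and a minimum-norm condition fixes the free datum — can be sketched with an ordinary minimum-norm least-squares adjustment. The scalar one-parameter-per-image setup below is an illustrative simplification of the paper's two-parameter model:

```python
import numpy as np

def network_orbit_errors(pairs, obs, n_images):
    """Minimum-norm least-squares adjustment: interferogram (i, j)
    observes p_j - p_i, and the pseudoinverse returns the minimum-norm
    solution, yielding quasi-absolute per-image error parameters."""
    A = np.zeros((len(pairs), n_images))
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = -1.0, 1.0       # difference design matrix
    return np.linalg.pinv(A) @ np.asarray(obs)

# Three images in an overdetermined network of three interferograms;
# the 0-2 observation (3.1) is slightly inconsistent with 0-1 plus 1-2,
# and the adjustment spreads the misfit across the network.
print(network_orbit_errors([(0, 1), (1, 2), (0, 2)], [1.0, 2.0, 3.1], 3))
```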

19. A recent time of minimum and an atmospheric eclipse in the ultraviolet spectrum of the Wolf-Rayet eclipsing binary V444 Cygni

    NASA Technical Reports Server (NTRS)

    Eaton, J. E.; Cherepashchuk, A. M.; Khaliullin, K. F.

    1982-01-01

V444 Cyg was observed continuously in the 1200-1900 angstrom region, together with fine error sensor (FES) observations in the optical. More than half of a primary minimum and almost a complete secondary minimum were observed. It is found that the time of minimum for the secondary eclipse is consistent with that for the primary eclipse, and the ultraviolet times of minimum are consistent with the optical ones. The spectrum shows a considerable amount of phase dependence. The general shapes and depths of the light curves for the FES signal and the 1565-1900 angstrom continuum are similar to those for the blue continuum. The FES, however, detected an atmospheric eclipse in line absorption at about the phase at which the N IV absorption was strongest. It is suggested that there is a source of continuum absorption shortward of 1460 angstrom which exists throughout a large part of the extended atmosphere and which, by implication, must considerably redden the ultraviolet continua of WN stars. A fairly high degree of ionization for the inner part of the WN star's atmosphere is implied.

  20. Maximizing return on socioeconomic investment in phase II proof-of-concept trials.

    PubMed

    Chen, Cong; Beckman, Robert A

    2014-04-01

Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials.
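A back-of-the-envelope version of the benefit-cost comparison can be written directly from the definition quoted above; the portfolio numbers and the specific risk-adjustment below are illustrative assumptions, not the authors' exact model:

```python
def benefit_cost_ratio(p_active, power, alpha, n_phase2, n_phase3, n_trials):
    """Risk-adjusted true positives advanced to phase III, divided by the
    risk-adjusted total phase II + phase III sample size."""
    true_go = n_trials * p_active * power            # active drugs advanced
    false_go = n_trials * (1 - p_active) * alpha     # inactive drugs advanced
    total_n = n_trials * n_phase2 + (true_go + false_go) * n_phase3
    return true_go / total_n

# Roughly the same phase II patient budget (~1000 patients): many small
# trials with a high Go bar versus fewer large trials with a lenient bar.
many_small = benefit_cost_ratio(0.2, 0.80, 0.05, n_phase2=50, n_phase3=400, n_trials=20)
few_large = benefit_cost_ratio(0.2, 0.90, 0.10, n_phase2=150, n_phase3=400, n_trials=7)
print(many_small, few_large)    # the small-trial portfolio wins
```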

  1. Eliminating ambiguity in digital signals

    NASA Technical Reports Server (NTRS)

    Weber, W. J., III

    1979-01-01

A multiamplitude minimum shift keying (MAMSK) transmission system and method of differential encoding overcome the problem of ambiguity associated with advanced digital-transmission techniques, with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that if signal points are properly encoded and decoded, bits are detected correctly regardless of phase ambiguities.

  2. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
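The unwrapping step at the heart of the two-frequency technique is compact enough to sketch directly; phases are in radians and f_ratio = f_high / f_low is an assumed known pattern-frequency ratio:

```python
import numpy as np

def two_freq_unwrap(phi_low, phi_high_wrapped, f_ratio):
    """Use the nonambiguous low-frequency phase to select the integer
    fringe order of the wrapped high-frequency phase."""
    order = np.round((phi_low * f_ratio - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * order

# Temporal noise enters twice: noise on phi_high_wrapped corrupts depth
# directly, while noise on phi_low * f_ratio beyond half a fringe flips
# `order` and produces a full-fringe ambiguity error -- the trade-off
# balanced when choosing the second frequency.
```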

  3. A high speed sequential decoder

    NASA Technical Reports Server (NTRS)

    Lum, H., Jr.

    1972-01-01

The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input Eb/N0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.

  4. An adaptive reentry guidance method considering the influence of blackout zone

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Yao, Jianyao; Qu, Xiangju

    2018-01-01

Reentry guidance has been a popular research topic because it is critical to a successful flight. Because existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory in the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, the pigeon inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height has almost been eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.

  5. Full-field stress determination in photoelasticity with phase shifting technique

    NASA Astrophysics Data System (ADS)

    Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng

    2018-04-01

Photoelasticity is an effective method for evaluating stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference method is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined in the full field through phase shifting. Then, through bicubic interpolation and derivation of a modified shear-difference method, the internal stress is calculated from a point on a free boundary along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to existing methods; the integration error is reduced when using theoretical photoelastic parameters to calculate the stress components at the same points. Results show that when the value of Δx/Δy approaches one, the error is at a minimum, and although the interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and a ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.

  6. Bracketing mid-pliocene sea surface temperature: maximum and minimum possible warming

    USGS Publications Warehouse

    Dowsett, Harry

    2004-01-01

Estimates of sea surface temperature (SST) from ocean cores reveal a warm phase of the Pliocene between about 3.3 and 3.0 mega-annums (Ma). Pollen records from land-based cores and sections, although not as well dated, also show evidence for a warmer climate at about the same time. Increased greenhouse forcing and altered ocean heat transport are the leading candidates for the underlying cause of Pliocene global warmth. However, despite being a period of global warmth, there exists considerable variability within this interval. Two new SST reconstructions have been created to provide a climatological error bar for warm peak phases of the Pliocene. These data represent the maximum and minimum possible warming recorded within the 3.3 to 3.0 Ma interval.

  7. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.

  8. Discovery of X-Ray Emission from the Crab Pulsar at Pulse Minimum

    NASA Technical Reports Server (NTRS)

    Tennant, Allyn F.; Becker, Werner; Juda, Michael; Elsner, Ronald F.; Kolodziejczak, Jeffery J.; Murray, Stephen S.; ODell, Stephen L.; Paerels, Frits; Swartz, Douglas A.

    2001-01-01

The Chandra X-Ray Observatory observed the Crab pulsar using the Low-Energy Transmission Grating with the High-Resolution Camera. Time-resolved zeroth-order images reveal that the pulsar emits X-rays at all pulse phases. Analysis of the flux at minimum - most likely non-thermal in origin - places an upper limit (T∞ < 2.1 MK) on the surface temperature of the underlying neutron star. In addition, analysis of the pulse profile establishes that the error in the Chandra-determined absolute time is quite small, -0.2 ± 0.1 ms.

  9. [Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].

    PubMed

    Guo, Lin; Kang, Lili; Wang, Dandan

    2013-02-01

The phase information of magnetic resonance (MR) phase images can be used in many MR imaging techniques, but phase wrapping of the images often results in inaccurate phase information, and phase unwrapping is therefore essential for these techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein branch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, a dipole-remover preprocessor, and the Prim algorithm for the minimum spanning tree were introduced to optimize the residues essential to the Goldstein branch-cut algorithm. Experimental results showed that the residues and branch-cuts were efficiently reduced, a continuous unwrapped phase surface was obtained, and the quality of MR phase images was obviously improved with the proposed method.

  10. Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana

    USGS Publications Warehouse

    Stewart, J.A.; Miller, R.L.; Butch, G.K.

    1986-01-01

Analysis of the stream-gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface-water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate streamflow data for three gaging stations. In addition, flow-routing models were used at two of the sites. Daily discharges produced from models did not meet the established accuracy criteria and, therefore, these methods should not replace stream-gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream-gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous-record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream-gaging stations were visited according to phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis, and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)

  11. Digital signal processor and processing method for GPS receivers

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1989-01-01

A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consist of an all-digital, minimum-bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase, thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.

  12. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  13. Automatic learning rate adjustment for self-supervising autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
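The error-coupled learning rate described above reduces to a few lines; the filter constant and gain below are illustrative assumptions rather than the values used in the study:

```python
def coupled_learning_rate(err_filtered, err_new, beta=0.05, gain=0.5):
    """Update the low-pass-filtered positioning error and derive the
    learning rate from it: the rate decays as performance improves and
    rises again automatically if the error starts to grow (e.g., after
    a broken joint or a moved camera)."""
    err_filtered = (1.0 - beta) * err_filtered + beta * err_new  # running average
    return gain * err_filtered, err_filtered                     # (rate, new state)
```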

14. Single pilot IFR operating problems determined from accident data analysis

    NASA Technical Reports Server (NTRS)

    Forsyth, D. L.; Shaughnessy, J. D.

    1978-01-01

The accident reports examined were restricted to instrument-rated pilots flying in IFR weather. A brief examination was made of accidents which occurred during all phases of flight and which were due to all causes. A detailed examination was made of those accidents which involved a single pilot, occurred during the landing phases of flight, and were due to pilot error. Problem areas found include: (1) landing phase operations, especially final approach, (2) pilot weather briefings, (3) night approaches in low IFR weather, (4) below-minimum approaches, (5) aircraft icing, (6) imprecise navigation, (7) descending below minimum IFR altitudes, (8) fuel mismanagement, (9) pilot overconfidence, and (10) high pilot workload, especially in twins. Some suggested areas of research include: (1) low cost deicing systems, (2) standardized navigation displays, (3) low cost low-altitude warning systems, (4) improved fuel management systems, (5) improved ATC communications, (6) more effective pilot training and experience acquisition methods, and (7) better weather data dissemination techniques.

  15. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  16. An Improved Model Predictive Current Controller of Switched Reluctance Machines Using Time-Multiplexed Current Sensor

    PubMed Central

    Li, Bingchu; Ling, Xiao; Huang, Yixiang; Gong, Liang; Liu, Chengliang

    2017-01-01

This paper presents a fixed-switching-frequency model predictive current controller using a multiplexed current sensor for switched reluctance machine (SRM) drives. The converter was modified to distinguish currents from simultaneously excited phases during the sampling period. The only current sensor installed in the converter was time-division multiplexed for phase current sampling. During the commutation stage, the control steps of adjacent phases were shifted so that sampling times were staggered. The maximum and minimum duty ratios of the pulse width modulation (PWM) were limited to keep enough sampling time for analog-to-digital (A/D) conversion. Current sensor multiplexing was realized without complex adjustment of either the driver circuit or the control algorithms, while helping to reduce the cost and the errors introduced in current sampling due to inconsistency between sensors. The proposed controller is validated by both simulation and experimental results with a 1.5 kW three-phase 12/8 SRM. Satisfactory current sampling is achieved, with little difference compared with independent phase current sensors for each phase. The proposed controller tracks the reference current profile as accurately as a model predictive current controller with independent phase current sensors, while having minor tracking errors compared with a hysteresis current controller. PMID:28513554

17. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  18. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
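The quantization step the patent builds on is the standard DCT-domain division by a quantization matrix; a minimal sketch follows, with the perceptually tuned matrix itself (the object the invention customizes per image) assumed given:

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Forward 2-D DCT of one 8x8 block, then divide by the quantization
    matrix and round; coarser qmatrix entries discard less-visible
    components at a lower bit rate."""
    return np.round(dctn(block, norm='ortho') / qmatrix)

def dequantize_block(q, qmatrix):
    """Reconstruct the block from its quantized DCT coefficients."""
    return idctn(q * qmatrix, norm='ortho')
```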

  19. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum-error landing problem with convexified constraints, then applies that solution to a minimum-fuel landing problem with convexified constraints. The result is a solution that is both a minimum-error and minimum-fuel solution and is also a feasible solution to the analogous system with non-convex thruster constraints.

  20. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation; these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  1. Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy

    PubMed Central

    Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern

    2011-01-01

    This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize throughput of the computation. Moreover, the number of hardware multipliers and dividers are minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming low hardware resources for designing an embedded DHM system. PMID:22163688
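For reference, the kind of minimum-squared-error unwrapping the architecture accelerates can be prototyped in a few lines with a DCT-based Poisson solve (Ghiglia-Romero style unweighted least squares); this software sketch is for illustration only and is not the paper's hardware pipeline:

```python
import numpy as np
from scipy.fft import dctn, idctn

def unwrap_least_squares(psi):
    """Unweighted least-squares phase unwrapping of a wrapped phase map
    psi (radians): build the Laplacian from wrapped differences, then
    solve Poisson's equation in the DCT domain (Neumann boundaries)."""
    wrap = lambda x: (x + np.pi) % (2 * np.pi) - np.pi
    dy, dx = np.diff(psi, axis=0), np.diff(psi, axis=1)
    rho = np.zeros_like(psi)
    rho[:-1, :] += wrap(dy); rho[1:, :] -= wrap(dy)   # d/dy of wrapped dy
    rho[:, :-1] += wrap(dx); rho[:, 1:] -= wrap(dx)   # d/dx of wrapped dx
    m, n = psi.shape
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / m) + np.cos(np.pi * j / n) - 2.0)
    denom[0, 0] = 1.0            # DC term: solution defined up to a constant
    return idctn(dctn(rho, norm='ortho') / denom, norm='ortho')
```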

  2. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.

  3. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  4. Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.

    DTIC Science & Technology

    1980-12-01

Thesis (AFIT/GE/EE/80D-45), Robert B. Taylor, 2Lt USAF; approved for public release, distribution unlimited. The thesis presents automatic recognition of phonemes using a syntactic processor for error correction. Indexed topics include hypothesis testing, the Bayes decision rule for minimum error, the Bayes decision rule for minimum risk, and the minimax test.

  5. Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.

  6. Accuracy analysis of pointing control system of solar power station

    NASA Technical Reports Server (NTRS)

    Hung, J. C.; Peebles, P. Z., Jr.

    1978-01-01

    The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.

  7. Adaptive control strategies for flexible robotic arm

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1993-01-01

    The motivation for this research arose when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible arm tip. Most existing neural network control algorithms are based on the direct method and exhibit very high sensitivity, if not unstable closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm was developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady-state response.

  8. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid-spatial-frequency error. Design considerations for the polishing lap, together with optimization of the process parameters (speeds, stroke, etc.), aimed at keeping the residual mid-spatial-frequency error to a minimum, are also presented.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and it operates at real-time rates.

  10. Phase unwrapping methods of corner reflector DInSAR monitoring slow ground deformation

    NASA Astrophysics Data System (ADS)

    Fu, Wenxue; Guo, Xiaofang; Tian, Qingjiu

    2007-06-01

    Differential interferometric synthetic aperture radar (DInSAR) has turned out to be a very powerful technique for the measurement of land deformation, but it requires the observed area to remain correlated, and coherence degradation seriously affects the quality of the interferogram. Corner reflector DInSAR (CRDInSAR) is a technique developed in recent years that compensates for this limitation of classical DInSAR. Owing to the stable amplitude and phase response of a corner reflector, the interferometric phase difference of the reflectors can be used to monitor or measure small and slow ground deformation even for large geometrical baselines and long time intervals between acquisitions. Phase unwrapping is the process whereby the absolute phase is reconstructed from its principal value as accurately as possible; it is a key step in DInSAR analysis. Classical phase unwrapping methods are either of the path-following type or of the minimum-norm type. However, if the coherence of the two images is very low, both methods yield erroneous results. In CRDInSAR, because only scattered points are available, phase unwrapping of the corner reflectors is performed on a sparse grid, so all reflectors are first connected by a Delaunay triangulation, which defines neighboring points and elementary cycles. When the monitored ground deformation is slow, that is, when unwrapped neighboring-CR phase gradients can be assumed to equal their wrapped-phase counterparts, a path-following method or phase unwrapping using the coefficient of the elevation-phase relation can be used. However, when unwrapped gradients exceed one-half cycle, the minimum cost flow (MCF) method can be used to unwrap the interferogram.

  11. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.

    PubMed

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-23

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) easily suffers frequent cycle slips and loss of lock as a result of high vehicle dynamics and low signal-to-noise ratios. With inertial navigation system (INS) aid, a PLL's tracking performance can be improved. However, in harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has limited ability to adapt its tracking. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier-to-noise density ratio (C/N₀), vehicle dynamics, aiding-information update time, noise bandwidth, and coherent integration time is established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis and demonstrate that the adaptive tracking method can effectively improve PLL tracking ability and integrated GNSS/INS navigation performance. In harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50%, and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.
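
    The bandwidth/integration-time trade-off behind such a minimum-tracking-error selection can be sketched with textbook approximations: thermal jitter falls as the loop bandwidth narrows, while dynamic stress error grows. The model below (2nd-order loop stress formula, an assumed C/N₀ of 28 dB-Hz, and a 1 Hz/s residual carrier ramp left after INS aiding) is illustrative only, not the paper's derived formulae.

        import numpy as np

        def thermal_jitter_deg(bn, t, cn0_dbhz):
            # Textbook PLL thermal-noise jitter (1-sigma, degrees)
            cn0 = 10 ** (cn0_dbhz / 10)
            return (180 / np.pi) * np.sqrt(bn / cn0 * (1 + 1 / (2 * t * cn0)))

        def stress_error_deg(bn, ramp_hz_s):
            # Steady-state error of a 2nd-order loop under a carrier frequency
            # ramp; the ramp models residual dynamics after INS aiding
            w0 = bn / 0.53
            return 360 * ramp_hz_s / w0 ** 2

        cn0, ramp = 28.0, 1.0     # dB-Hz and Hz/s: assumed harsh-environment values
        best = min(((thermal_jitter_deg(bn, t, cn0)
                     + stress_error_deg(bn, ramp) / 3, bn, t)
                    for bn in np.arange(2.0, 20.0, 0.5)
                    for t in (0.001, 0.002, 0.005, 0.010, 0.020)),  # <= 20 ms
                   key=lambda z: z[0])
        print("sigma = %.2f deg at Bn = %.1f Hz, T = %.0f ms"
              % (best[0], best[1], best[2] * 1e3))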

  12. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is that one should infer the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates the minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except for the minimum-error case.

  13. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  14. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also present from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.

  15. Inhomogeneous models of the Venus clouds containing sulfur

    NASA Technical Reports Server (NTRS)

    Smith, S. M.; Pollack, J. B.; Giver, L. P.; Cuzzi, J. N.; Podolak, M.

    1979-01-01

    Based on the suggestion that elemental sulfur is responsible for the yellow color of Venus, calculations of the 3.4-micron reflectivity phase function of two sulfur-containing inhomogeneous cloud models are compared with that of a homogeneous model. Assuming reflectivity observations with 25% or less total error, comparison of the model calculations leads to a minimum detectable mass of sulfur equal to 7% of the mass of sulfuric acid for the inhomogeneous drop model. For the inhomogeneous cloud model, the comparison leads to a minimum detectable mass of sulfur between 17% and 38% of the mass of the acid drops, depending upon the actual size of the large particles. It is concluded that moderately accurate 3.4-micron reflectivity observations are capable of detecting quite small amounts of elemental sulfur at the top of the Venus clouds.

  16. Investigation of Bandwidth-Efficient Coding and Modulation Techniques

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    The necessary technology was studied to improve the bandwidth efficiency of the space-to-ground communications network using the current capabilities of that network as a baseline. The study was aimed at making space payloads, for example the Hubble Space Telescope, more capable without the need to completely redesign the link. Particular emphasis was placed on the following: (1) the requirements necessary to convert an existing standard 4-ary phase-shift-keying communications link to one that can support, as a minimum, 8-ary phase-shift keying with error correction applied; and (2) the feasibility of using the existing equipment configurations with additional signal-processing equipment to realize the higher-order modulation and coding schemes.

  17. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample-trained 2D histogram θ-division and minimum error. θ-division methods based on the minimum error principle and the 2D color histogram were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of erroneous division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the most suitable channel is selected. To further improve accuracy, a combination approach is presented that couples θ-division with other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526

  18. Navigator alignment using radar scan

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2016-04-05

    The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.

  19. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis functions: even though the multi-aspect basis functions are not orthogonal, they can still be used in an MMSE estimator that models the object echo as a weighted sum of the multi-aspect basis functions. MMSE estimation is applied to target imaging with synthetic aperture sonar.

  20. Design and performance of mobile terminal for North American MSAT network

    NASA Technical Reports Server (NTRS)

    Fuji, Tsuyoshi; Tsuchiya, Makio; Isota, Yoji; Aoki, Katsuhiko

    1995-01-01

    The mobile terminal (MT), which can be configured for various applications, i.e. land mobile, transportable, fixed site, and maritime use, has been developed. Medium gain and high gain antennas are available. The MT can support circuit-switched voice and data service. Additionally, cellular roaming service, net radio, and Group 3 facsimile services are optionally provided. A Mitsubishi handheld portable phone can be used as a stand-alone cellular-only phone, or it can provide MSAT voice service when connected to the MT. The MT, which operates in L-band (1.5 GHz/1.6 GHz), satisfies an equivalent isotropically radiated power (EIRP) of 12.5 dBW minimum and a G/T of -16 dB/K minimum for the medium gain system and -12 dB/K for the high gain system. Excellent transmit phase noise and bit error rate performance is achieved by using new technologies.

  1. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
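
    The mutual-exclusion idea is easy to demonstrate: thresholding one plane against a mask and another against the inverted mask guarantees disjoint dot patterns at tints below 50%. In the sketch below a uniform random array stands in for a real blue noise mask, and the tint values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        h, w = 64, 64
        bnm = rng.random((h, w))    # placeholder: a real BNM is a precomputed
                                    # blue-noise dither array, not white noise

        cyan, magenta, yellow = (np.full((h, w), 0.3) for _ in range(3))

        c_dots = cyan > bnm                           # plane 1: the mask itself
        m_dots = magenta > (1.0 - bnm)                # plane 2: inverted mask
        y_dots = yellow > np.roll(bnm, (h // 2, w // 2), (0, 1))  # plane 3: shifted

        # The inverted pair is mutually exclusive at tints below 50%, which is
        # what suppresses the luminance-visible dot-on-dot overlap
        print("C/M overlap fraction:", float(np.mean(c_dots & m_dots)))
        print("C/Y overlap fraction:", float(np.mean(c_dots & y_dots)))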

  2. Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.

    2006-01-01

    In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects due to the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
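
    A minimal sample-matrix-inversion computation of MMSE combining weights, of the kind referenced above. The array size, snapshot count, and stationary toy signal model are assumptions; the nonstationarity analysis itself is not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        N, K = 4, 200                               # antennas, snapshots (toy)
        s = np.exp(2j * np.pi * rng.random(K))      # unit-modulus reference signal
        steer = np.exp(1j * np.pi * np.arange(N) * np.sin(0.3))  # ULA response
        noise = (rng.normal(size=(N, K))
                 + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
        x = np.outer(steer, s) + 0.5 * noise        # received snapshots

        R = x @ x.conj().T / K                      # sample covariance matrix
        p = x @ s.conj() / K                        # sample cross-correlation
        w = np.linalg.solve(R, p)                   # SMI weights ~ R^{-1} p

        y = w.conj() @ x                            # combined output
        print("residual MSE:", float(np.mean(np.abs(y - s) ** 2)))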

  3. MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim

    2018-01-01

    A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. At the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to calculate the linear minimum mean square error (LMMSE) estimate of the received pilots. Finally, block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.

  4. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  5. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
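
    The minimum-phase-plus-pure-delay synthesis described above can be sketched as follows: a real-cepstrum (homomorphic) construction keeps each HRTF's magnitude spectrum while discarding excess phase, and the ITD is reinstated as an integer-sample delay. The decaying-noise "HRIR" and the 0.4 ms ITD below are placeholders, not measured data.

        import numpy as np

        def min_phase(h, nfft=1024):
            # Real-cepstrum (homomorphic) minimum-phase counterpart of h:
            # keep the magnitude spectrum, discard the excess phase
            H = np.abs(np.fft.fft(h, nfft))
            cep = np.fft.ifft(np.log(np.maximum(H, 1e-12))).real
            fold = np.zeros(nfft)
            fold[0], fold[nfft // 2] = cep[0], cep[nfft // 2]
            fold[1:nfft // 2] = 2 * cep[1:nfft // 2]
            return np.fft.ifft(np.exp(np.fft.fft(fold))).real[:len(h)]

        fs = 44100
        rng = np.random.default_rng(4)
        hrir = rng.normal(0, 1, 128) * np.exp(-np.arange(128) / 20.0)  # stand-in
        hrir_mp = min_phase(hrir)

        noise = rng.normal(0, 1, fs // 10)          # 100 ms white-noise burst
        itd = int(round(0.0004 * fs))               # 0.4 ms ITD as a pure delay
        left = np.convolve(noise, hrir_mp)
        right = np.concatenate([np.zeros(itd), np.convolve(noise, hrir_mp)])
        print("ITD applied:", itd, "samples")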

  6. ERP correlates of error processing during performance on the Halstead Category Test.

    PubMed

    Santos, I M; Teixeira, A R; Tomé, A M; Pereira, A T; Rodrigues, P; Vagos, P; Costa, J; Carrito, M L; Oliveira, B; DeFilippis, N A; Silva, C F

    2016-08-01

    The Halstead Category Test (HCT) is a neuropsychological test that measures a person's ability to formulate and apply abstract principles. Performance must be adjusted based on feedback after each trial, and errors are common until the underlying rules are discovered. Event-related potential (ERP) studies associated with the HCT are lacking. This paper demonstrates the use of a methodology inspired by Singular Spectrum Analysis (SSA), applied to EEG signals, to remove high-amplitude ocular and movement artifacts during performance on the test. This filtering technique introduces no phase or latency distortions, with minimum loss of relevant EEG information. Importantly, the test was applied in its original clinical format, without introducing adaptations for ERP recording. After signal treatment, the feedback-related negativity (FRN) wave, which is related to error processing, was identified. This component peaked around 250 ms after feedback in fronto-central electrodes. As expected, errors elicited more negative amplitudes than correct responses. Results are discussed in terms of the increased clinical potential that coupling ERP information with behavioral performance data can bring to the specificity of the HCT in diagnosing different types of impairment in frontal brain function.

  7. NASA Astrophysics Data System (ADS)

    2018-05-01

    The Seebeck coefficient has been used to investigate QCB in Cr alloys [8,9]. Plots of dS/dT (in the limit T → 2 K) as a function of concentration for the (Cr97.8Si2.2)100-yMoy [8] and (Cr84Re16)100-zVz [9] alloy systems depicted anomalies at the QCP. The possibility of QCB in the (Cr100-xAlx)95Mo5 alloy system is explored by analysing the S(T) data of Fig. 1, performing a linear least-squares fit through the 2 K < T < 6.5 K data points. The gradient was taken as dS/dT|T→2K. Fig. 8 shows dS/dT|T→2K for concentrations in the range 0.5 ≤ x ≤ 8.6. It increases rapidly to a maximum at x = 1.0, then decreases on further Al addition and displays a minimum just above x = 1.4. This is the concentration where magnetism is seen to disappear on the TN(x) magnetic phase diagram. dS/dT|T→2K shows a second minimum just above x = 4.4, i.e. corresponding to the concentration where magnetism reappears on the TN(x) magnetic phase diagram (see Fig. 17). Similar minima were also observed at the QCP in the (Cr84Re16)100-zVz [9] and (Cr86Ru14)100-rVr [13] alloy systems. The relatively large error bars in Fig. 8 originate from the large errors in the fitting routine due to significant scatter in the original Seebeck coefficient data at low temperatures. The solid line through the dS/dT|T→2K data points is a guide to the eye, while the dotted vertical lines indicate the boundaries between the ISDW, P and CSDW phases. The minima observed in the dS/dT|T→2K curve correlate with these boundaries.

  8. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  9. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    NASA Astrophysics Data System (ADS)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a genetic algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are acquired through the least-squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).
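
    To make the GA-based minimum-zone idea concrete, the sketch below applies the same machinery to a straightness profile (a line) rather than a helicoid: the objective is the spread of deviations from the ideal form, and a simple truncation-selection GA searches the form parameters. Population size, mutation scale, and generation count are illustrative choices, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        xs = np.linspace(0.0, 10.0, 50)
        zs = 0.5 * xs + 1.0 + rng.normal(0, 0.02, xs.size)   # "measured" profile

        def zone_width(p):
            # Minimum-zone objective: total spread of deviations from the
            # ideal form z = a*x + b with parameters p = (a, b)
            dev = zs - (p[0] * xs + p[1])
            return dev.max() - dev.min()

        pop = np.array([0.5, 1.0]) + rng.normal(0, 0.1, (60, 2))  # near LSQ fit
        for _ in range(200):
            fit = np.array([zone_width(p) for p in pop])
            parents = pop[np.argsort(fit)[:20]]                   # truncation selection
            kids = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.005, (40, 2))
            pop = np.vstack([parents, kids])                      # elitism + mutation

        best = min(pop, key=zone_width)
        print("minimum-zone width: %.4f at a=%.4f, b=%.4f"
              % (zone_width(best), best[0], best[1]))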

  10. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge-loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.

  11. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM at their respective bit error rates (BER) under minimum-mean-square-error (MMSE) equalization.

  12. Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry

    NASA Astrophysics Data System (ADS)

    Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael

    2015-05-01

    This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
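
    A minimal stochastic-gradient CMA loop for a constant-modulus constellation, with a center-spike initialization standing in for the MMSE initialization used in the work above; channel taps, step size, and noise level are toy assumptions, and the iNET packet structure is omitted.

        import numpy as np

        rng = np.random.default_rng(6)
        sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 5000)))  # QPSK
        chan = np.array([1.0, 0.3 + 0.2j, 0.1])        # toy dispersive channel
        x = np.convolve(sym, chan)[:sym.size]
        x += 0.01 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

        L, mu, R2 = 11, 2e-3, 1.0                      # taps, step size, CM radius
        w = np.zeros(L, complex)
        w[L // 2] = 1.0                                # center-spike initialization
        for n in range(L, x.size):
            u = x[n - L:n][::-1]                       # regressor (newest first)
            y = w.conj() @ u                           # equalizer output
            e = np.abs(y) ** 2 - R2                    # constant-modulus error
            w -= mu * e * np.conj(y) * u               # stochastic gradient step

        tail = np.array([w.conj() @ x[n - L:n][::-1]
                         for n in range(x.size - 500, x.size)])
        print("CM dispersion after adaptation:",
              float(np.mean((np.abs(tail) ** 2 - R2) ** 2)))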

  13. Research on effects of phase error in phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai

    2007-12-01

    In phase-shifting interferometry, the phase-shift error introduced by the phase shifter is the main factor that directly affects the measurement accuracy of the phase-shifting interferometer. In this paper, the sources and types of phase-shift error are introduced, and some methods to eliminate these errors are mentioned. Based on the theory of phase-shifting interferometry, the effects of phase-shift error are analyzed in detail. A liquid crystal display (LCD) used as a new type of shifter has the advantage that the phase shift can be controlled digitally without any mechanically moving or rotating element. By changing the coded image displayed on the LCD, the phase shift in the measuring system is induced. The LCD's phase modulation characteristic is analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error introduced by the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. To reduce the error, a new error-compensation algorithm is put forward: the error can be obtained by processing the interferogram, the interferogram can then be compensated, and the measurement results can be obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
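
    For reference, the ideal four-step relation underlying such analyses, with a small linear shifter miscalibration injected to show how the recovered phase degrades; the signal parameters and the 0.05 rad step error are arbitrary choices, not values from the paper.

        import numpy as np

        x = np.linspace(0, 4 * np.pi, 400)
        phi = 1.3 * np.sin(x) + x / 3              # "true" phase (toy)
        A, B, eps = 1.0, 0.7, 0.05                 # eps: linear shifter error, rad

        # Four frames with a nominal pi/2 step corrupted by eps per step
        I = [A + B * np.cos(phi + k * (np.pi / 2 + eps)) for k in range(4)]
        wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])   # ideal four-step formula

        err = np.angle(np.exp(1j * (wrapped - phi)))
        print("peak phase error induced by the shifter miscalibration: %.4f rad"
              % float(np.abs(err).max()))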

  14. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.

  15. A new approach for minimum phase output definition

    NASA Astrophysics Data System (ADS)

    Jahangiri, Fatemeh; Talebi, Heidar Ali; Menhaj, Mohammad Bagher; Ebenbauer, Christian

    2017-01-01

    This paper presents a novel method of output redefinition for linear systems. The approach also determines the possible relative degrees of the system corresponding to any new output vector. To guarantee the minimum phase property with a prescribed relative degree, a set of new conditions is introduced. A key feature of these conditions is that no transformation of any form is needed, which makes the scheme suitable for optimisation problems in control that must ensure the minimum phase property. Moreover, the results are useful for sensor placement problems and for obtaining minimum phase approximations of non-minimum phase systems. Numerical examples, including an example of an unmanned aerial vehicle system, are given to demonstrate the effectiveness of the methodology.

  16. Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation

    USGS Publications Warehouse

    Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.

    1984-01-01

    This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation. Operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data. It was recommended that this sparsity be remedied as funds become available.

  17. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.
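
    First-order propagation from complex data to the log-amplitude and phase form is one line per component: both standard errors scale as sigma/|Z|. The sketch below also applies the 10 per cent cutoff recommended above; the data values themselves are invented for illustration.

        import numpy as np

        Z = np.array([1e-3 + 2e-3j, 5e-5 + 1e-5j])   # toy complex EM responses
        sigma = np.array([1e-4, 2e-5])               # per-component noise std

        amp = np.abs(Z)
        sig_logamp = sigma / amp / np.log(10)        # d(log10|Z|) = d|Z|/(|Z| ln 10)
        sig_phase = sigma / amp                      # radians, small-error limit

        keep = sigma / amp < 0.10                    # the 10 per cent cutoff above
        for i in range(Z.size):
            print("log10|Z| = %.3f +/- %.4f, phase = %.3f +/- %.4f rad, keep=%s"
                  % (np.log10(amp[i]), sig_logamp[i],
                     np.angle(Z[i]), sig_phase[i], keep[i]))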

  18. Optical DC overlay measurement in the 2nd level process of 65 nm alternating phase shift mask

    NASA Astrophysics Data System (ADS)

    Ma, Jian; Han, Ke; Lee, Kyung; Korobko, Yulia; Silva, Mary; Chavez, Joas; Irvine, Brian; Henrichs, Sven; Chakravorty, Kishore; Olshausen, Robert; Chandramouli, Mahesh; Mammen, Bobby; Padmanaban, Ramaswamy

    2005-11-01

    Alternating phase shift mask (APSM) techniques help bridge the significant gap between the lithography wavelength and the patterning of minimum features, specifically the poly line of 35 nm gate length (1x) in Intel's 65 nm technology. One of the key steps in making an APSM mask is to pattern the 2nd-level resist to within the design tolerances so that the zero-phase apertures are protected by resist while the pi-phase apertures are left wide open for quartz etch. The ability to align the 2nd level to the 1st-level binary pattern, i.e. the 2nd-level overlay capability, is very important, as is the capability of measuring the overlay accurately. Poor overlay can cause so-called encroachment after quartz etch, producing undesired quartz bumps in the pi-apertures or quartz pits in the zero-apertures. In this paper, a simple, low-cost optical setup for 2nd-level DC (develop check) overlay measurements in the high-volume manufacturing (HVM) of APSM masks is presented. By removing systematic errors in overlay associated with TIS and MIS (tool-induced shift and mask-process-induced shift), it is shown that this setup is capable of supporting the measurement of DC overlay with a tolerance as small as +/- 25 nm. Outstanding issues, such as DC overlay error component analysis, DC-FC (final check) overlay correlation, and overlay linearity (periphery vs. in-die), are discussed.

  19. Using certification trails to achieve software fault tolerance

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Masson, Gerald M.

    1993-01-01

    A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output, even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems; these can be found in the full version of this paper.
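
    A sketch of the certification-trail pattern on the minimum spanning tree problem. Here the trail is taken to be the sorted edge order: phase 2 verifies the order in linear time and then runs the union-find scan without sorting, so it is simpler and faster than phase 1, yet still flags a corrupt trail. This particular choice of trail is our illustration, not necessarily the construction used in the paper.

        from typing import List, Tuple

        Edge = Tuple[int, int, float]

        def find(parent: List[int], a: int) -> int:
            while parent[a] != a:
                parent[a] = parent[parent[a]]        # path halving
                a = parent[a]
            return a

        def kruskal(n: int, order: List[Edge]) -> List[Edge]:
            # Union-find scan over edges already in ascending weight order
            parent, tree = list(range(n)), []
            for u, v, w in order:
                ru, rv = find(parent, u), find(parent, v)
                if ru != rv:
                    parent[ru] = rv
                    tree.append((u, v, w))
            return tree

        def phase1(n: int, edges: List[Edge]):
            trail = sorted(edges, key=lambda e: e[2])     # the certification trail
            return kruskal(n, trail), trail

        def phase2(n: int, edges: List[Edge], trail: List[Edge]):
            # Verify the trail instead of trusting it: same edges, sorted order
            if sorted(edges) != sorted(trail):
                raise ValueError("trail corrupt: edge sets differ")
            if any(trail[i][2] > trail[i + 1][2] for i in range(len(trail) - 1)):
                raise ValueError("trail corrupt: not sorted")
            return kruskal(n, trail)                      # no O(E log E) sort here

        edges = [(0, 1, 4.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 2.0), (1, 3, 5.0)]
        mst1, trail = phase1(4, edges)
        mst2 = phase2(4, edges, trail)
        assert mst1 == mst2                               # final comparison phase
        print("MST weight:", sum(w for _, _, w in mst1))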

  20. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transmission path with the minimum error in the hardware system was then proposed through analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  1. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.

  2. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  3. Simultaneous refractive index and temperature measurements using a tapered bend-resistant fiber interferometer.

    PubMed

    Lu, Ping; Harris, Jeremie; Xu, Yanping; Lu, Yuangang; Chen, Liang; Bao, Xiaoyi

    2012-11-15

    Simultaneous measurements of refractive index (RI) and temperature are proposed and experimentally demonstrated by using a tapered bend-resistant fiber interferometer. Different phase shifts of an inner and outer cladding mode of the fiber interferometer are measured to determine the temperature compensated RI of a glycerol solution. The temperature coefficients of the inner and outer cladding modes are -0.0253 rad/°C and -0.0523 rad/°C, and the RI coefficients are 4.0403 rad/RIU and 44.823 rad/RIU, respectively. The minimum errors of temperature and RI are 0.6°C and 0.001 RIU, respectively.

  4. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a 2nd-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and 1st-order phase effects. The result is robust reconstruction of gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope are demonstrated.

  5. Speedup of minimum discontinuity phase unwrapping algorithm with a reference phase distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yihang; Han, Yu; Li, Fengjiao; Zhang, Qican

    2018-06-01

    In three-dimensional (3D) shape measurement based on phase analysis, the analysis usually produces a wrapped phase map ranging from -π to π with 2π discontinuities, and thus a phase unwrapping algorithm is necessary to recover the continuous, natural phase map from which the 3D height distribution can be restored. The minimum discontinuity phase unwrapping algorithm can solve many different kinds of phase unwrapping problems, but its main drawback is that it requires a large amount of computation and has low efficiency in searching for the improving loop within the phase's discontinuity area. To overcome this drawback, a speedup of the minimum discontinuity phase unwrapping algorithm that uses the phase distribution on a reference plane is proposed. In this improved algorithm, before the minimum discontinuity phase unwrapping is carried out, an integer K is calculated from the ratio of the wrapped phase to the natural phase on a reference plane; the jump counts of the unwrapped phase are then reduced by adding 2Kπ, so the efficiency of the minimum discontinuity phase unwrapping algorithm is significantly improved. Both simulated and experimental results verify the feasibility of the proposed improvement, and both clearly show that the algorithm works well and has high efficiency.
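
    The fringe-order pre-correction is a one-liner once the reference phase is known: the integer K per pixel follows from rounding the reference-to-wrapped phase difference over 2π. The one-dimensional sketch below shows the jump count dropping to zero before any minimum-discontinuity solver would run; the carrier frequency and deformation amplitude are invented values.

        import numpy as np

        x = np.linspace(0, 1, 500)
        ref = 60 * np.pi * x                      # known carrier phase on the plane
        true = ref + 2 * np.sin(8 * np.pi * x)    # object adds a small deviation
        wrapped = np.angle(np.exp(1j * true))     # measured wrapped phase

        K = np.round((ref - wrapped) / (2 * np.pi)).astype(int)
        coarse = wrapped + 2 * np.pi * K          # pre-unwrapped result

        print("residual jumps left for the solver:",
              int(np.sum(np.abs(np.diff(coarse)) > np.pi)))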

  6. Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)

    DTIC Science & Technology

    2011-01-01

    Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection. Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. The cost of the interior point method depends on the value of the regularization parameter λ; in the experiments, the total number of PCG iterations was found to be a few hundred.

  7. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory-variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely to choose the correct model than the other tree-selection rules when the relationship was strong and the explanatory variables equally important; the same held 1) with weaker relationships and equally important explanatory variables, and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.

  8. What We Have Learned About the Existing Trace Element Partitioning data During the Population Phase of traceDs

    NASA Astrophysics Data System (ADS)

    Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.

    2015-12-01

    The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web-based interface (via the portal lepr.ofm-research.org). We set a minimum standard for inclusion, the threshold criteria being the inclusion of: experimental conditions (temperature, pressure, device, container, time, etc.); major element composition of the phases; and trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale is that for such data the degree of equilibration is unknown and, more important, no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the composition of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important: ~20% of the papers do not satisfy one or more of the threshold criteria; the standard format for presenting data is the average, a convention developed when publication space was constrained, even though all the information can now be published as electronic supplements; and the uncertainties published with the compositional data are often not adequately explained (e.g. 1 or 2 sigma, standard deviation of the average, etc.). We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication of all analyses with errors based on peak count rates and background, plus information on the structural state of the mineral (e.g. orthopyroxene vs. pigeonite).

  9. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in a non-null interferometer. With a multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.

  10. Reference respiratory waveforms by minimum jerk model analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka

    Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with better than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk (the third derivative of position with respect to time, i.e., the derivative of acceleration, which is proportional to the time rate of change of force), was introduced to model a patient-specific respiratory motion wave to provide smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior-inferior direction from six volunteers, assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum-peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy, as indicated by radial tracking discrepancy affected by respiratory phase, was improved in the minimum jerk theoretical model by 7.0% and 13% compared with the cosine and free-breathing waveforms, respectively. Conclusions: The minimum jerk theoretical respiratory wave can achieve smooth tracking by CyberKnife® and may provide patient-specific respiratory modeling, which may be useful for respiratory training and coaching, as well as quality assurance of the mechanical CyberKnife® robotic trajectory.
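
    The quintic minimum-jerk segment below is the classical closed-form solution between rest states; composing an inhale and an exhale segment into a breathing cycle is a plausible reading of the paper's reference wave, not its exact construction, and the excursion and durations are assumed.

```python
# Classical minimum-jerk segment between rest states:
#   x(t) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5),  tau = t/T.
import numpy as np

def min_jerk(x0, xf, n):
    tau = np.linspace(0.0, 1.0, n)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

dt = 0.1                                     # matches the 0.1 s optical sampling
inhale = min_jerk(0.0, 10.0, int(2.0 / dt))  # assumed 10 mm SI excursion, 2 s
exhale = min_jerk(10.0, 0.0, int(3.0 / dt))  # 3 s exhale
wave = np.concatenate([inhale, exhale])      # one "reference" breathing cycle

# Jerk = third time derivative; small peak jerk is what enables smooth tracking.
jerk = np.gradient(np.gradient(np.gradient(wave, dt), dt), dt)
print("peak |jerk| (mm/s^3):", np.abs(jerk).max())
```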

  11. Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging

    PubMed Central

    Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.

    2017-01-01

    Background: To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods: Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results: Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation of the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed, but the phase modulation remained. The in vivo comparison between pulsed-gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions: A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory for removing the phase-error-induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed, the inconsistent signal phase modulation across shots has to be resolved before shot combination is performed. PMID:28516049

  12. VizieR Online Data Catalog: WISE/NEOWISE Mars-crossing asteroids (Ali-Lagoa+, 2017)

    NASA Astrophysics Data System (ADS)

    Ali-Lagoa, V.; Delbo, M.

    2017-07-01

    We fitted the near-Earth asteroid thermal model of Harris (1998, Icarus, 131, 29) to WISE/NEOWISE thermal infrared data (see, e.g., Mainzer et al. 2011ApJ...736..100M, and Masiero et al. 2014, Cat. J/ApJ/791/121). The table contains the best-fitting values of size and beaming parameter. We note that the beaming parameter is a strictly positive quantity, but a negative sign is given to indicate whenever we could not fit it and had to assume a default value. We also provide the visible geometric albedos computed from the diameter and the tabulated absolute magnitudes. Minimum relative errors of 10, 15, and 20 percent should be considered for size, beaming parameter and albedo in those cases for which the beaming parameter could be fitted. Otherwise, the minimum relative errors in size and albedo increase to 20 and 40 percent (see, e.g., Mainzer et al. 2011ApJ...736..100M). The asteroid absolute magnitudes and slope parameters retrieved from the Minor Planet Center (MPC) are included, as well as the number of observations used in each WISE band (nW2, nW3, nW4) and the corresponding average values of heliocentric and geocentric distances and phase angle of the observations. The ephemerides were retrieved from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).

  13. Continuous fractional-order Zero Phase Error Tracking Control.

    PubMed

    Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan

    2018-04-01

    A continuous-time fractional-order feedforward control algorithm for tracking desired time-varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called a Fractional-order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e., with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary. Each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. Besides, a modified quasi-perfect tracking scheme is presented for those systems for which future tracking trajectory information is unavailable or which have problems with high-frequency disturbance rejection when the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal Peltier platform are shown to validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
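
    For orientation, the sketch below implements the classical integer-order ZPETC idea (invert the cancellable dynamics, apply the non-cancellable zero polynomial with reversed time, and normalize by its squared DC gain) on an assumed discrete plant; it is not the fractional-order FZPETC of the paper.

```python
# Integer-order ZPETC sketch. Assumed plant (illustrative, not from the paper):
#   y(k) = q^{-1} (1 + 1.2 q^{-1}) / (1 - 1.5 q^{-1} + 0.7 q^{-2}) u(k),
# whose zero at q = -1.2 is non-cancellable (non-minimum phase).
import numpy as np
from scipy.signal import lfilter

A = np.array([1.0, -1.5, 0.7])      # denominator A(q^{-1})
Bu = np.array([1.0, 1.2])           # non-cancellable numerator factor Bu(q^{-1})
d = 1                               # plant delay (samples)

k = np.arange(600)
yd = np.sin(2 * np.pi * 0.01 * k)   # desired trajectory, known in advance

# Feedforward u(k) = A(q^{-1}) Bu(q) yd(k+d) / Bu(1)^2   (here Ba = 1):
yd_adv = np.roll(yd, -d)                        # preview: yd(k+d)
v = lfilter(A, [1.0], yd_adv)                   # apply A(q^{-1})
w = Bu[0] * v + Bu[1] * np.roll(v, -1)          # Bu(q): uses one future sample
u = w / Bu.sum() ** 2                           # normalize by Bu(1)^2

# Simulate the plant; overall transfer Bu(q^{-1})Bu(q)/Bu(1)^2 has zero phase.
y = lfilter(np.concatenate(([0.0] * d, Bu)), A, u)
err = y[50:-50] - yd[50:-50]                    # drop filter/preview edge samples
print("max tracking error:", np.abs(err).max())
```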

  14. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Industries eligible for minimum wage phase-in. 510.22 Section 510.22 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in...

  15. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Industries eligible for minimum wage phase-in. 510.22... ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in. (a) Appendix A contains a listing of all industries included in the Census of Manufacturing. Appendix...

  16. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Industries eligible for minimum wage phase-in. 510.22... ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in. (a) Appendix A contains a listing of all industries included in the Census of Manufacturing. Appendix...

  17. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Industries eligible for minimum wage phase-in. 510.22... ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in. (a) Appendix A contains a listing of all industries included in the Census of Manufacturing. Appendix...

  18. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Industries eligible for minimum wage phase-in. 510.22... ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in. (a) Appendix A contains a listing of all industries included in the Census of Manufacturing. Appendix...

  19. Two-sample binary phase 2 trials with low type I error and low sample size

    PubMed Central

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A.

    2017-01-01

    Summary We address design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
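
    The combined rule can be made concrete by exact enumeration over independent binomial arms; the conjunction "reject iff E >= m and E - C > r" and all design numbers below are illustrative assumptions (the paper's two-stage designs with interim stopping are more involved).

```python
# Exact type I error of a combined one-/two-sample rejection rule under the
# null that both arms share success probability p0.
from scipy.stats import binom

def type1_error(p0, nE, nC, m, r):
    """P(E >= m and E - C > r | both arms ~ Binomial(., p0))."""
    total = 0.0
    for e in range(nE + 1):
        for c in range(nC + 1):
            if e >= m and e - c > r:
                total += binom.pmf(e, nE, p0) * binom.pmf(c, nC, p0)
    return total

# 2:1 randomization, as in the design: 40 experimental vs 20 control patients.
print(f"alpha = {type1_error(p0=0.2, nE=40, nC=20, m=14, r=5):.4f}")
```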

  20. Research on the system performance evaluation of minimum-shift keying in uplink ground-to-satellite with gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Zhang, Ao; Ma, Jing

    2017-07-01

    Minimum-shift keying (MSK) has the advantages of constant envelope, continuous phase, and high spectral efficiency, and it is applied in radio communication and optical fiber communication. MSK modulation with coherent detection is proposed for the ground-to-satellite laser communication system; in addition, considering the noise inherent to the uplink, such as intensity scintillation and beam wander, the communication performance of the coherent-detection MSK modulation system is studied for the ground-to-satellite laser uplink. Based on the gamma-gamma channel model, a closed form of the bit error rate (BER) of MSK modulation with coherent detection is derived. The BER performance of the MSK modulation system is simulated and analyzed in weak, medium, and strong turbulence. To optimize the parameters and configuration of the transmitter and receiver of the ground-to-satellite coherent MSK system, the influences of the beam divergence angle, the zenith angle, the transmitter beam radius, and the receiver diameter are studied.
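
    A numerical counterpart to such a closed-form BER can be sketched by averaging a coherent-detection conditional BER over the gamma-gamma irradiance density; the BPSK-like conditional form Q(sqrt(2*SNR*I)) and the turbulence parameters below are stand-in assumptions, not the paper's derivation.

```python
# Average BER over gamma-gamma fading by direct numerical integration.
import numpy as np
from scipy.special import kv, gamma as G, erfc
from scipy.integrate import quad

def gg_pdf(I, a, b):
    """Gamma-gamma irradiance pdf with fading parameters alpha=a, beta=b."""
    return (2 * (a * b) ** ((a + b) / 2) / (G(a) * G(b))
            * I ** ((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(a * b * I)))

def avg_ber(snr, a, b):
    q = lambda x: 0.5 * erfc(x / np.sqrt(2))        # Gaussian Q-function
    integrand = lambda I: q(np.sqrt(2 * snr * I)) * gg_pdf(I, a, b)
    return quad(integrand, 0, np.inf, limit=200)[0]

# Typical textbook (alpha, beta) pairs for weak vs strong turbulence.
for a, b, label in [(11.6, 10.1, "weak"), (4.2, 1.4, "strong")]:
    print(label, f"BER at linear SNR 10: {avg_ber(10.0, a, b):.3e}")
```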

  1. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE
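
    Both checks described above (zeros outside the unit circle, and comparison against the minimum-phase function predicted from the amplitude) can be sketched for a discrete filter; the kernel below is a toy stand-in for a measured Wiener kernel, and the cepstral construction used here is the standard equivalent of the Hilbert-transform relation.

```python
import numpy as np

kernel = np.array([0.2, 1.0, -0.8, 0.3, 0.1])   # toy stand-in for a kernel

# Check 1: an FIR filter is minimum phase iff all its zeros are inside
# the unit circle.
zeros = np.roots(kernel)
print("minimum phase?", np.all(np.abs(zeros) < 1.0))

# Check 2: minimum-phase spectrum predicted from the amplitude alone,
# via the real cepstrum folded onto its causal part.
nfft = 1024
H = np.fft.fft(kernel, nfft)
cep = np.fft.ifft(np.log(np.abs(H) + 1e-300)).real
fold = np.concatenate(([cep[0]],
                       2 * cep[1:nfft // 2], [cep[nfft // 2]],
                       np.zeros(nfft // 2 - 1)))
H_min = np.exp(np.fft.fft(fold))
# Same magnitude by construction; the phases differ iff the kernel is
# not minimum phase.
dphi = (np.unwrap(np.angle(H))[:nfft // 2]
        - np.unwrap(np.angle(H_min))[:nfft // 2])
print("max phase deviation (rad):", np.abs(dphi).max())
```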

  2. The orbit of asteroid (99942) Apophis as determined from optical and radar observations

    NASA Astrophysics Data System (ADS)

    Vinogradova, T. A.; Kochetova, O. M.; Chernetenko, Yu. A.; Shor, V. A.; Yagudina, E. I.

    2008-08-01

    The results of improving the orbit accuracy for the asteroid Apophis and the circumstances of its approach to Earth in 2029 are described. Gravitational perturbations from all of the major planets and Pluto, Ceres, Pallas, and Vesta are taken into account in the equations of motion of the asteroid. Relativistic perturbations from the Sun and perturbations due to the oblateness of the Sun and Earth and due to the light pressure are also included in the model. Perturbations from the Earth and Moon are considered separately. The coordinates of the perturbing bodies are calculated using DE405. The phase correction and the gravitational deflection of light are taken into account. The numerical integration of the equations of motion and equations in variations is performed by the 15th-order Everhart method. The error of the numerical integration over the 2005-2029 interval, estimated using forward and backward computations, is not more than 3 × 10-11 AU. Improved coordinates and velocities at epoch JD2454200.5 (April 10, 2007) were obtained by applying the weighted least-squares fit. For the period from March 15, 2004, to August 16, 2006, 989 optical and 7 radar observations were used. The resulting system represents the optical observations with an error of 0.37 (66 conditional equations were rejected). The residuals of the radar observations are an order, or more, smaller than their errors. The system of Apophis’ elements and the estimates of their precision obtained in this study are in perfect agreement with the results published by other authors. The minimum Apophis-Earth distance is about 38 200 km on April 13, 2029. This estimate agrees to within 20 km with those calculated based on other published systems of elements. The effect of some model components on the minimum distance is estimated.

  3. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  4. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  5. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
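
    The repeated-reconstruction bookkeeping is easy to sketch: the per-pixel mean-squared error splits into squared bias plus variance. The `reconstruct` function below is a hypothetical placeholder for the regularized NIR inversion.

```python
# Image MSE decomposition (bias^2 + variance) from repeated reconstructions
# under fresh measurement noise.
import numpy as np

def mse_decomposition(reconstruct, truth, noise_std, n_rep=100, seed=0):
    rng = np.random.default_rng(seed)
    recs = np.stack([reconstruct(noise_std, rng) for _ in range(n_rep)])
    bias = recs.mean(axis=0) - truth
    var = recs.var(axis=0, ddof=1)
    return (bias ** 2).mean(), var.mean()   # image-averaged bias^2, variance

# Toy stand-in: "reconstruction" = truth shrunk by regularization + noise.
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
def reconstruct(noise_std, rng, reg=0.3):
    return (1 - reg) * truth + rng.normal(0, noise_std, truth.shape)

b2, v = mse_decomposition(reconstruct, truth, noise_std=0.05)
print(f"bias^2={b2:.4f}  variance={v:.4f}  MSE~{b2 + v:.4f}")
```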

  6. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost function value and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.

  7. 3D absolute shape measurement of live rabbit hearts with a superfast two-frequency phase-shifting technique

    PubMed Central

    Wang, Yajun; Laughner, Jacob I.; Efimov, Igor R.; Zhang, Song

    2013-01-01

    This paper presents a two-frequency binary phase-shifting technique to measure the three-dimensional (3D) absolute shape of beating rabbit hearts. Because of the low contrast of the cardiac surface, the projector and the camera must remain focused, which poses challenges for any existing binary method where the measurement accuracy is low. To overcome this challenge, this paper proposes utilizing the optimal pulse width modulation (OPWM) technique to generate high-frequency fringe patterns, and the error-diffusion dithering technique to produce low-frequency fringe patterns. Furthermore, this paper shows that fringe patterns produced with blue light provide the best-quality measurements compared with fringe patterns generated with red or green light, and that the minimum data acquisition speed for high-quality measurements is around 800 Hz for a rabbit heart beating at 180 beats per minute. PMID:23482151

  8. Beating-heart registration for organ-mounted robots.

    PubMed

    Wood, Nathan A; Schwartzman, David; Passineau, Michael J; Moraca, Robert J; Zenati, Marco A; Riviere, Cameron N

    2018-03-06

    Organ-mounted robots address the problem of beating-heart surgery by adhering to the heart, passively providing a platform that approaches zero relative motion. Because of the quasi-periodic deformation of the heart due to heartbeat and respiration, registration must be temporal as well as spatial. Motion data were collected in the porcine model in vivo (N = 6). Fourier series models of heart motion were developed. By comparing registrations generated using an iterative closest-point approach at different phases of respiration, the phase corresponding to the minimum registration distance is identified. The spatiotemporal registration technique presented here reduces registration error by an average of 4.2 mm over the 6 trials, in comparison with a more simplistic static registration that merely averages out the physiological motion. An empirical metric for spatiotemporal registration of organ-mounted robots is defined and demonstrated using data from animal models in vivo. Copyright © 2018 John Wiley & Sons, Ltd.
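
    A minimal sketch of the two ingredients named above: fitting a truncated Fourier series to quasi-periodic motion by least squares, and selecting the respiratory phase with the smallest residual as a proxy for registration distance. The frequencies, harmonic counts, and synthetic trace are assumptions, not the porcine data.

```python
import numpy as np

fs, T = 50.0, 30.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_resp, f_card = 0.25, 1.8               # assumed breathing/heartbeat rates (Hz)
x = 4 * np.sin(2 * np.pi * f_resp * t) + 1.5 * np.sin(2 * np.pi * f_card * t)
x += np.random.default_rng(1).normal(0, 0.2, t.size)

# Least-squares Fourier fit: columns are cos/sin harmonics of both rhythms.
cols = [np.ones_like(t)]
for f in (f_resp, f_card):
    for h in (1, 2, 3):
        cols += [np.cos(2 * np.pi * h * f * t), np.sin(2 * np.pi * h * f * t)]
Adm = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(Adm, x, rcond=None)
model = Adm @ coef

# Bin samples by respiratory phase; the best-registering phase is the bin
# with the smallest model residual.
phase = (t * f_resp) % 1.0
bins = np.digitize(phase, np.linspace(0, 1, 11)) - 1
resid = np.abs(x - model)
best = min(range(10), key=lambda b: resid[bins == b].mean())
print("best respiratory phase bin:", best)
```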

  9. Contextual Advantage for State Discrimination

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.

    2018-02-01

    Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.

  10. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  11. Circular carrier squeezing interferometry: Suppressing phase shift error in simultaneous phase-shifting point-diffraction interferometer

    NASA Astrophysics Data System (ADS)

    Zheng, Donghui; Chen, Lei; Li, Jinpeng; Sun, Qinyuan; Zhu, Wenhua; Anderson, James; Zhao, Jian; Schülzgen, Axel

    2018-03-01

    Circular carrier squeezing interferometry (CCSI) is proposed and applied to suppress phase shift error in simultaneous phase-shifting point-diffraction interferometer (SPSPDI). By introducing a defocus, four phase-shifting point-diffraction interferograms with circular carrier are acquired, and then converted into linear carrier interferograms by a coordinate transform. Rearranging the transformed interferograms into a spatial-temporal fringe (STF), so the error lobe will be separated from the phase lobe in the Fourier spectrum of the STF, and filtering the phase lobe to calculate the extended phase, when combined with the corresponding inverse coordinate transform, exactly retrieves the initial phase. Both simulations and experiments validate the ability of CCSI to suppress the ripple error generated by the phase shift error. Compared with carrier squeezing interferometry (CSI), CCSI is effective on some occasions in which a linear carrier is difficult to introduce, and with the added benefit of eliminating retrace error.

  12. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are develped using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.

  13. Force and Directional Force Modulation Effects on Accuracy and Variability in Low-Level Pinch Force Tracking.

    PubMed

    Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence

    2018-01-01

    The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.

  14. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. Then, to quantify forecast performance, we computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast and skill scores were computed; the results are compared with the perfect score and the reference-forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts Kp to within about 1 unit, even though persistence beats it.
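
    The continuous verification statistics named above are standard and easy to compute; the sketch below uses placeholder forecast/observation arrays rather than the actual CCMC archive.

```python
# Mean error, MAE, RMSE, multiplicative bias, and correlation for a
# forecast series against observations.
import numpy as np

def verify(forecast, observed):
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    err = f - o
    return {
        "ME": err.mean(),
        "MAE": np.abs(err).mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
        "mult_bias": f.mean() / o.mean(),
        "corr": np.corrcoef(f, o)[0, 1],
    }

rng = np.random.default_rng(2)
kp_obs = rng.integers(0, 9, 200).astype(float)          # placeholder data
kp_fc = np.clip(kp_obs + rng.normal(0.3, 1.0, 200), 0, 9)  # biased forecast
print(verify(kp_fc, kp_obs))
```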

  15. Seasonal prediction skill of winter temperature over North India

    NASA Astrophysics Data System (ADS)

    Tiwari, P. R.; Kar, S. C.; Mohanty, U. C.; Dey, S.; Kumari, S.; Sinha, P.

    2016-04-01

    The climatology, amplitude error, phase error, and mean square skill score (MSSS) of temperature predictions from five different state-of-the-art general circulation models (GCMs) have been examined for the winter (December-January-February) seasons over North India. In this region, temperature variability affects the phenological development of wheat crops and the grain yield. The GCM forecasts of temperature for a whole season, issued in November by various organizations, are compared with observed gridded temperature data obtained from the India Meteorological Department (IMD) for the period 1982-2009. The MSSS indicates that the models have skills of varying degrees. Predictions of maximum and minimum temperature obtained from the National Centers for Environmental Prediction (NCEP) climate forecast system model (NCEP_CFSv2) are compared with station-level observations from the Snow and Avalanche Study Establishment (SASE). It has been found that when the model temperatures are corrected to account for the bias between the model and actual orography, the predictions are able to delineate the observed trend, unlike the predictions without orography correction.

  16. Stack Number Influence on the Accuracy of Aster Gdem (V2)

    NASA Astrophysics Data System (ADS)

    Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.

    2017-09-01

    In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with varied topography (Bomehen and Tazehabad) were used. The results show that in both study areas a STKN of 19 yields the minimum error, although this minimum differs little from the error at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with changing topography and increasing slope and height values, but changes in STKN have no important effect on the error values. Furthermore, at high values of STKN, the effect of slope on elevation accuracy practically decreases. Also, there is no great correlation between the residuals and STKN in the ASTER GDEM.

  17. Characterization of the International Linear Collider damping ring optics

    NASA Astrophysics Data System (ADS)

    Shanks, J.; Rubin, D. L.; Sagan, D.

    2014-10-01

    A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam based emittance tuning is limited by beam position monitor (BPM) measurement errors, number of corrector magnets and their placement, and correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.

  18. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.

  19. Thermal transport properties, magnetic susceptibility and neutron diffraction studies of the (Cr100-xAlx)95Mo5 alloy system

    NASA Astrophysics Data System (ADS)

    Muchono, B.; Sheppard, C. J.; Venter, A. M.; Prinsloo, A. R. E.

    2018-05-01

    The Seebeck coefficient has been used to investigate quantum critical behavior (QCB) in Cr alloys [8,9]. Plots of dS/dT (in the limit T → 2 K) as a function of concentration for the (Cr97.8Si2.2)100-yMoy [8] and the (Cr84Re16)100-zVz [9] alloy systems depicted anomalies at the quantum critical point (QCP). The possibility of QCB in the (Cr100-xAlx)95Mo5 alloy system is explored by analysing the S(T) data of Fig. 1, performing a linear least-squares fit through the 2 K < T < 6.5 K data points. The gradient was taken as dS/dT|T→2K. Fig. 8 shows dS/dT|T→2K for concentrations in the range 0.5 ≤ x ≤ 8.6. It increases rapidly to a maximum at x = 1.0, then decreases on further Al addition and displays a minimum just above x = 1.4. This is the concentration where magnetism is seen to disappear on the TN(x) magnetic phase diagram. dS/dT|T→2K shows a second minimum just above x = 4.4, i.e. corresponding to the concentration where magnetism reappears on the TN(x) magnetic phase diagram (see Fig. 17). Similar minima were also observed at the QCP in the (Cr84Re16)100-zVz [9] and (Cr86Ru14)100-rVr [13] alloy systems. The relatively large error bars in Fig. 8 originate from the large errors in the fitting routine due to significant scatter in the original Seebeck coefficient data at low temperatures. The solid line through the dS/dT|T→2K data points is a guide to the eye, while the dotted vertical lines indicate the boundaries between the ISDW, P and CSDW phases. The minima observed in the dS/dT|T→2K curve correlate with these boundaries.

  20. Estimation of representative elementary volume for DNAPL saturation and DNAPL-water interfacial areas in 2D heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun

    2017-06-01

    Representative elementary volume (REV) is important for determining properties of porous media and of the contaminants migrating through them, especially dense nonaqueous phase liquids (DNAPLs) in the subsurface environment. In this study, an experiment on the long-term migration of a commonly used DNAPL, perchloroethylene (PCE), is performed in a two-dimensional (2D) sandbox in which several system variables, including porosity, PCE saturation (S_oil), and PCE-water interfacial area (A_OW), are accurately quantified by light transmission techniques over the entire PCE migration process. Moreover, the REVs for these system variables are estimated by a relative gradient error criterion (ε_gi), and the results indicate that the frequency of the minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both the minimum S_oil-REV and minimum A_OW-REV sizes change from irregular and random to regular and smooth. When the experiment enters the redistribution process, the cumulative frequency of the minimum S_oil-REV size reveals a linear positive correlation, while the frequency of the minimum A_OW-REV size tends to a Gaussian distribution in the range of 2.0 mm to 7.0 mm, with a peak at 13.0 mm to 14.0 mm. Undoubtedly, this study will facilitate the quantification of REVs for material and fluid properties in a rapid, handy, and economical manner, which helps enhance our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.

  1. Statistics of the radiated field of a space-to-earth microwave power transfer system

    NASA Technical Reports Server (NTRS)

    Stevens, G. H.; Leininger, G.

    1976-01-01

    Statistics such as the average power density pattern, the variance of the power density pattern, and the variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. A limitation on the spectral width of the phase reference for phase control was also established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules do not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. Under these conditions the expected radiation pattern is virtually the same as the error-free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).

  2. A multi-frequency inverse-phase error compensation method for projector nonlinear in 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Mao, Cuili; Lu, Rongsheng; Liu, Zhijian

    2018-07-01

    In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodic phase errors is analyzed. The periodic phase errors can be adaptively compensated in the wrapped phase maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodic phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
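
    The principle behind inverse-phase compensation can be simulated directly: a second fringe set shifted by pi/N flips the sign of the gamma-induced periodic phase error, so averaging the two retrieved phases largely cancels it. The four-step scheme, gamma value, and fringe model below are illustrative assumptions, not the paper's multi-frequency procedure.

```python
import numpy as np

def four_step_phase(phi, offset, gam):
    """Retrieve phase from four gamma-distorted fringe intensities."""
    shifts = offset + np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
    I = [(0.5 + 0.5 * np.cos(phi + s)) ** gam for s in shifts]
    return np.arctan2(I[3] - I[1], I[0] - I[2]) - offset

phi_true = np.linspace(0, 6 * np.pi, 2000)     # true (unwrapped) phase
wrap = lambda p: np.angle(np.exp(1j * p))

p1 = four_step_phase(phi_true, 0.0, gam=2.2)           # normal set
p2 = four_step_phase(phi_true, np.pi / 4, gam=2.2)     # inverse-phase set
# Average in the complex domain to dodge 2*pi wrap discontinuities;
# the dominant periodic error term flips sign between the two sets.
p_avg = np.angle(np.exp(1j * p1) + np.exp(1j * p2))

err1 = np.abs(wrap(p1 - phi_true)).max()
err_avg = np.abs(wrap(p_avg - phi_true)).max()
print(f"periodic phase error: single set {err1:.4f} rad, "
      f"averaged {err_avg:.4f} rad")
```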

  3. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  4. Optimal plane search method in blood flow measurements by magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Bargiel, Pawel; Orkisz, Maciej; Przelaskowski, Artur; Piatkowska-Janko, Ewa; Bogorodzki, Piotr; Wolak, Tomasz

    2004-07-01

    This paper offers an algorithm for determining the blood flow parameters in the neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axis. This new approach aims at significantly shortening the time required to complete measurements using nuclear magnetic resonance techniques. Based on a defined error function, the algorithm scans the solution space to find the minimum of the error function, and thus determines a single plane characterized by a minimum measurement error, which allows for an accurate measurement of blood flow in the four carotid arteries. The paper also describes a practical implementation of this method (as a module of a larger imaging-measuring system), including preliminary research results.

  5. Phase error statistics of a phase-locked loop synchronized direct detection optical PPM communication system

    NASA Technical Reports Server (NTRS)

    Natarajan, Suresh; Gardner, C. S.

    1987-01-01

    Receiver timing synchronization of an optical pulse-position modulation (PPM) communication system can be achieved using a phase-locked loop (PLL), provided the photodetector output is suitably processed. The magnitude of the PLL phase error is a good indicator of the timing error at the receiver decoder. The statistics of the phase error are investigated while varying several key system parameters such as PPM order, signal and background strengths, and PLL bandwidth. A practical optical communication system utilizing a laser diode transmitter and an avalanche photodiode in the receiver is described, and the sampled phase error data are presented. A linear regression analysis is applied to the data to obtain estimates of the relational constants involving the phase error variance and incident signal power.

  6. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of the carrier phase measurement, so properly handling the carrier phase error is important for improving GNSS compass accuracy. In this work, we propose a dual-frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual-frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well in the presence of multiple large errors, an improvement over previous research. The core of the proposed algorithm is removing the geometrical distance from the dual-frequency carrier phase measurements, after which the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the carrier phase measurements at different frequencies contain the same geometrical distance. Then, we propose the DDGF detection to detect large carrier phase error differences between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open-sky test, a man-made multipath test, and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual-frequency carrier phase measurements by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
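
    The geometry-removal step can be sketched as follows: differencing the dual-frequency double-differenced carrier phases (in meters) cancels the common geometric range, leaving an ambiguity level plus carrier errors, so excursions flag large phase errors. The simulated data and robust threshold are illustrative assumptions, not the paper's exact detector.

```python
# Geometry-free check on dual-frequency double-differenced carrier phase.
import numpy as np

rng = np.random.default_rng(3)
n = 500
geometry = 1.0e7 + 3.0 * np.sin(2 * np.pi * np.arange(n) / 300)  # shared range (m)
dd_L1 = geometry + rng.normal(0, 0.003, n)   # DD carrier phase on L1 (m)
dd_L2 = geometry + rng.normal(0, 0.003, n)   # DD carrier phase on L2 (m)
dd_L1[200] += 0.19                           # inject one large error (~1 cycle)

gf = dd_L1 - dd_L2                           # geometric range cancels
gf -= np.median(gf)                          # remove the ambiguity level
# Robust 6-sigma threshold from the median absolute deviation.
sigma = 1.4826 * np.median(np.abs(gf - np.median(gf)))
flags = np.abs(gf) > 6 * sigma
print("flagged epochs:", np.flatnonzero(flags))   # -> [200]
```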

  7. Effects of sharing information on drug administration errors in pediatric wards: a pre–post intervention study

    PubMed Central

    Chua, Siew-Siang; Choo, Sim-Mei; Sulaiman, Che Zuraini; Omar, Asma; Thong, Meow-Keong

    2017-01-01

    Background and purpose: Drug administration errors are more likely to reach the patient than other medication errors. The main aim of this study was to determine whether the sharing of information on drug administration errors among health care providers would reduce such problems. Patients and methods: This study involved direct, undisguised observations of drug administrations in two pediatric wards of a major teaching hospital in Kuala Lumpur, Malaysia. This study consisted of two phases: Phase 1 (pre-intervention) and Phase 2 (post-intervention). Data were collected by two observers over a 40-day period in both Phase 1 and Phase 2 of the study. Both observers were pharmacy graduates: Observer 1 had just completed her undergraduate pharmacy degree, whereas Observer 2 was doing her one-year internship as a provisionally registered pharmacist in the hospital under study. A drug administration error was defined as a discrepancy between the drug regimen received by the patient and that intended by the prescriber, and also drug administration procedures that did not follow standard hospital policies and procedures. Results from Phase 1 of the study were analyzed, presented, and discussed with the ward staff before commencement of data collection in Phase 2. Results: A total of 1,284 and 1,401 doses of drugs were administered in Phase 1 and Phase 2, respectively. The rate of drug administration errors was reduced significantly from Phase 1 to Phase 2 (44.3% versus 28.6%, respectively; P<0.001). Logistic regression analysis showed that the adjusted odds of drug administration errors in Phase 1 of the study were almost three times those in Phase 2 (P<0.001). The most common types of errors were incorrect administration technique and incorrect drug preparation. The nasogastric and intravenous routes of drug administration contributed significantly to the rate of drug administration errors. Conclusion: This study showed that sharing the types of errors that had occurred was significantly associated with a reduction in drug administration errors. PMID:28356748

  8. Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption

    NASA Astrophysics Data System (ADS)

    Mathavaraj, S.; Pandiyan, R.; Padhi, R.

    2017-12-01

    A multi-phase, constrained, fuel-optimal trajectory design approach based on the Legendre pseudospectral method is presented in this paper. The objective is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (and is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase 1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase 2 ('attitude hold'), holding the lander attitude for 35 sec for vision camera processing to obtain the navigation error; and phase 3 ('braking with precise navigation') from the end of phase 2 to 100 m altitude over the landing site, where navigation accuracy is good (owing to vision camera navigation inputs). At the end of phase 1, there are constraints on position and attitude. In phase 2, the attitude must be held throughout. At the end of phase 3, the constraints include accuracy in position, velocity, and attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.

  9. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    NASA Astrophysics Data System (ADS)

    Gao, Qian

    For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges, such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling, need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into these categories of challenges. Rigorous proofs and analyses are provided for each contribution, with fair comparisons against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, MSM-JDCM has many other merits, such as being capable of mitigating nonlinear distortion by including a peak-to-average power ratio (PAPR) constraint, minimizing inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reducing the bit error rate (BER) in combination with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of differently structured channels with cross-talk. Our novel constellation design scheme, termed CSK-Advanced, is compared with the conventional decoupled system at the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve a lower BER than counterparts applying zero-forcing (ZF) or linear minimum mean squared error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 affects the estimate of the source-to-relay (StR) channel in phase 2, which is thereby made erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively.
The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme and pseudo-random codes are employed for different users. We consider a heavy-traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, then it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression for the throughput is first derived and then used to develop a scheduling algorithm that maximizes the throughput. Our full-duplex scheduling is compared with half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to the employment of both MIMO and CDMA are observed.
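
    For the two-phase estimation part, the LMMSE estimator and its Bayesian MSE have the standard closed forms sketched below; the training matrix, channel prior, and problem sizes are illustrative assumptions.

```python
# LMMSE channel estimate: h_hat = R_h S^H (S R_h S^H + sigma^2 I)^{-1} y,
# with Bayesian MSE = trace of the posterior covariance R_h - W S R_h.
import numpy as np

rng = np.random.default_rng(4)
L, T, sigma2 = 4, 8, 0.1                         # taps, training length, noise
R_h = np.diag(np.exp(-0.5 * np.arange(L)))       # exponential power-delay prior

S = (rng.normal(size=(T, L)) + 1j * rng.normal(size=(T, L))) / np.sqrt(2)
h = np.linalg.cholesky(R_h) @ (rng.normal(size=L)
                               + 1j * rng.normal(size=L)) / np.sqrt(2)
y = S @ h + np.sqrt(sigma2 / 2) * (rng.normal(size=T)
                                   + 1j * rng.normal(size=T))

W = R_h @ S.conj().T @ np.linalg.inv(S @ R_h @ S.conj().T + sigma2 * np.eye(T))
h_hat = W @ y
bmse = np.trace(R_h - W @ S @ R_h).real          # E||h - h_hat||^2
print(f"realized error {np.linalg.norm(h - h_hat)**2:.4f}, BMSE {bmse:.4f}")
```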

  10. Time-dependent phase error correction using digital waveform synthesis

    DOEpatents

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.

  11. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... case records, financial records, and automated data systems; (ii) The data are free from computational errors and are internally consistent ...

  12. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further suppressed by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more of the transient error. Thus, a new dual-cosine window, whose non-zero discrete Fourier transform bins lie at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression and better transient error suppression when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N^-2) of the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (about 0.4 dB). One axis of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. Model simulation shows that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, less time consumption, and a short data requirement; calculation on actual balance data is consistent with the simulation. Thus, the new dual-cosine window is effective and practical for FRF estimation.
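
    A sum-of-cosines window with non-zero DFT bins only at 0, ±1, and ±3 is easy to construct; the coefficients below are illustrative (chosen so that the front-end value w[0] is zero), not the paper's optimized values:

```python
import numpy as np

def dual_cosine_window(N, a=(0.5, 0.4, 0.1)):
    """Sum-of-cosines window whose nonzero DFT bins sit at 0, +/-1, +/-3.

    Coefficients are illustrative, picked so that w[0] = 0 (small front-end
    value); the paper's optimized values are not reproduced here.
    """
    n = np.arange(N)
    a0, a1, a3 = a
    return a0 - a1 * np.cos(2 * np.pi * n / N) - a3 * np.cos(6 * np.pi * n / N)

w = dual_cosine_window(1024)
W = np.fft.fft(w)
nonzero = np.nonzero(np.abs(W) > 1e-8 * np.abs(W).max())[0]
print(nonzero)   # -> [0, 1, 3, 1021, 1023], i.e. bins 0, +/-1, +/-3
```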

  13. Two-sample binary phase 2 trials with low type I error and low sample size.

    PubMed

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A

    2017-04-30

    We address the design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E >= m, with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with a type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
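
    The combined rule is simple to evaluate numerically. The sketch below computes the rejection probability P(E >= m and E - C > r) for a single-stage version of the rule (the paper adds a two-stage structure on top); the design numbers are hypothetical:

```python
from scipy.stats import binom

def reject_prob(n_e, n_c, p_e, p_c, m, r):
    """P(E >= m and E - C > r) for independent binomial arms.

    Single-stage illustration of the combined rule; the paper's designs add
    an interim (two-stage) structure on top of this.
    """
    total = 0.0
    for e in range(m, n_e + 1):
        c_max = min(e - r - 1, n_c)       # need C < e - r, i.e. C <= e - r - 1
        if c_max >= 0:
            total += binom.pmf(e, n_e, p_e) * binom.cdf(c_max, n_c, p_c)
    return total

# Hypothetical design with 2:1 randomization, null p0 = 0.2, alternative p1 = 0.4:
alpha = reject_prob(40, 20, 0.2, 0.2, m=14, r=4)   # type I error at the null
power = reject_prob(40, 20, 0.4, 0.2, m=14, r=4)   # power at the alternative
print(f"type I error {alpha:.3f}, power {power:.3f}")
```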

  14. A bundle with a preformatted medical order sheet and an introductory course to reduce prescription errors in neonates.

    PubMed

    Palmero, David; Di Paolo, Ermindo R; Beauport, Lydie; Pannatier, André; Tolsa, Jean-François

    2016-01-01

    The objective of this study was to assess whether the introduction of a new preformatted medical order sheet coupled with an introductory course affected prescription quality and the frequency of errors during the prescription stage in a neonatal intensive care unit (NICU). Two-phase observational study consisting of two consecutive 4-month phases: pre-intervention (phase 0) and post-intervention (phase I) conducted in an 11-bed NICU in a Swiss university hospital. Interventions consisted of the introduction of a new preformatted medical order sheet with explicit information supplied, coupled with a staff introductory course on appropriate prescription and medication errors. The main outcomes measured were formal aspects of prescription and frequency and nature of prescription errors. Eighty-three and 81 patients were included in phase 0 and phase I, respectively. A total of 505 handwritten prescriptions in phase 0 and 525 in phase I were analysed. The rate of prescription errors decreased significantly from 28.9% in phase 0 to 13.5% in phase I (p < 0.05). Compared with phase 0, dose errors, name confusion and errors in frequency and rate of drug administration decreased in phase I, from 5.4 to 2.7% (p < 0.05), 5.9 to 0.2% (p < 0.05), 3.6 to 0.2% (p < 0.05), and 4.7 to 2.1% (p < 0.05), respectively. The rate of incomplete and ambiguous prescriptions decreased from 44.2 to 25.7 and 8.5 to 3.2% (p < 0.05), respectively. Inexpensive and simple interventions can improve the intelligibility of prescriptions and reduce medication errors. Medication errors are frequent in NICUs and prescription is one of the most critical steps. CPOE reduce prescription errors, but their implementation is not available everywhere. Preformatted medical order sheet coupled with an introductory course decrease medication errors in a NICU. Preformatted medical order sheet is an inexpensive and readily implemented alternative to CPOE.

  15. Quadrature-quadrature phase-shift keying

    NASA Astrophysics Data System (ADS)

    Saha, Debabrata; Birdsall, Theodore G.

    1989-05-01

    Quadrature-quadrature phase-shift keying (Q2PSK) is a spectrally efficient modulation scheme which utilizes the available signal space dimensions more efficiently than two-dimensional schemes such as QPSK and MSK (minimum-shift keying). It uses two data-shaping pulses and two carriers, which are pairwise quadrature in phase, to create a four-dimensional signal space, and it doubles the transmission rate relative to QPSK and MSK. However, the bit error rate performance depends on the choice of pulse pair. With simple sinusoidal and cosinusoidal data pulses, the Eb/N0 requirement for Pb(E) = 10^-5 is approximately 1.6 dB higher than that of MSK. Without additional constraints, Q2PSK does not maintain a constant envelope; however, a simple block coding provides one. This coded signal substantially outperforms MSK and TFM (tamed frequency modulation) in bandwidth efficiency. Like MSK, Q2PSK also has self-clocking and self-synchronizing ability. An optimum class of pulse shapes for use in the Q2PSK format is presented. One suboptimum realization achieves the Nyquist rate of 2 bits/s/Hz using binary detection.
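
    A minimal construction of the four Q2PSK basis signals with the simple sinusoidal/cosinusoidal pulse pair; the sampling rate, carrier frequency, and symbol interval are illustrative choices, not taken from the paper:

```python
import numpy as np

# Q2PSK baseband construction over one symbol interval [-T, T) (sketch).
fs, T, fc = 100_000, 1e-3, 10_000           # illustrative rates
t = np.arange(-T, T, 1 / fs)

p1 = np.cos(np.pi * t / (2 * T))            # data-shaping pulse pair,
p2 = np.sin(np.pi * t / (2 * T))            # quadrature in the "pulse" dimension
c1 = np.cos(2 * np.pi * fc * t)             # carrier pair,
c2 = np.sin(2 * np.pi * fc * t)             # quadrature in phase

basis = np.array([p1 * c1, p2 * c1, p1 * c2, p2 * c2])
bits = np.array([1, -1, -1, 1])             # four bits per symbol interval
s = bits @ basis                            # twice the rate of QPSK/MSK per symbol

# The four basis signals are (approximately) orthogonal over the interval:
print(np.round(basis @ basis.T / len(t), 3))
```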

  16. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2015-01-01

    A cross-power-spectrum-phase-based adaptive technique is discussed which iteratively determines the time delay between two coherent digitized signals. The adaptive delay algorithm belongs to a class of algorithms that identify the minimum of a pattern-matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function fitted to the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
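
    The core phase-slope relation (delay = -slope/2π for the cross-spectrum phase) can be checked directly. The sketch below uses a one-shot least-squares fit rather than the paper's adaptive gradient search, with made-up signal parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n, true_delay = 1000.0, 4096, 0.012        # illustrative values [Hz, samples, s]

x = rng.standard_normal(n)
shift = int(round(true_delay * fs))
y = np.roll(x, shift) + 0.1 * rng.standard_normal(n)   # delayed, noisy copy

# Cross power spectrum and its unwrapped phase.
X, Y = np.fft.rfft(x), np.fft.rfft(y)
f = np.fft.rfftfreq(n, 1 / fs)
phase = np.unwrap(np.angle(Y * X.conj()))

band = (f > 5) & (f < 200)                     # fit only where coherence is high
slope = np.polyfit(f[band], phase[band], 1)[0] # phase ~ -2*pi*f*delay
delay_hat = -slope / (2 * np.pi)
print(f"estimated delay {delay_hat*1000:.2f} ms (true {shift/fs*1000:.2f} ms)")
```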

  17. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96, and 35 errors per 1000 words, respectively. The following section…

  18. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of a hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double-biprism setup is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and of phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution below 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  19. Phase correction system for automatic focusing of synthetic aperture radar

    DOEpatents

    Eichel, Paul H.; Ghiglia, Dennis C.; Jakowatz, Jr., Charles V.

    1990-01-01

    A phase gradient autofocus system for use in synthetic aperture imaging accurately compensates for arbitrary phase errors in each imaged frame by locating highlighted areas and determining the phase disturbance or image spread associated with each of them. An estimate of the image spread is determined for each highlighted area in a line, in the case of one-dimensional processing, or in a sector, in the case of two-dimensional processing. The phase error is determined using phase gradient processing, then removed from the uncorrected image, and the process is performed iteratively to substantially eliminate phase errors which can degrade the image.
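
    A textbook one-dimensional sketch of the phase-gradient-autofocus loop follows; the fixed window width and single iteration are simplifications that a practical implementation (window shrinking, iteration control) would refine:

```python
import numpy as np

def pga_iteration(img):
    """One phase-gradient-autofocus iteration (textbook sketch, 1-D azimuth).

    img: complex image, rows = range bins, columns = azimuth samples (>= 64).
    Returns the image corrected by the estimated common azimuth phase error.
    """
    n_az = img.shape[1]
    # 1) Center the brightest scatterer of each range line (circular shift).
    centered = np.array([np.roll(row, n_az // 2 - np.argmax(np.abs(row)))
                         for row in img])
    # 2) Window around the center to isolate the point-response blur.
    win = np.zeros(n_az); win[n_az // 2 - 32: n_az // 2 + 32] = 1.0
    g = np.fft.ifft(centered * win, axis=1)        # back to the aperture domain
    # 3) Phase-gradient estimate, averaged over range lines, then integrated.
    grad = np.angle(np.sum(g[:, :-1].conj() * np.diff(g, axis=1), axis=0))
    phi = np.concatenate([[0.0], np.cumsum(grad)])
    phi -= np.polyval(np.polyfit(np.arange(n_az), phi, 1), np.arange(n_az))
    # 4) Remove the error in the aperture (phase-history) domain.
    return np.fft.fft(np.fft.ifft(img, axis=1) * np.exp(-1j * phi), axis=1)

# Smoke test on synthetic point targets blurred by a common azimuth phase error:
rng = np.random.default_rng(5)
n_r, n_az = 16, 256
err = 3.0 * np.sin(2 * np.pi * np.arange(n_az) / n_az) ** 2
scene = np.zeros((n_r, n_az), complex)
scene[np.arange(n_r), rng.integers(0, n_az, n_r)] = 1.0
blurred = np.fft.fft(np.fft.ifft(scene, axis=1) * np.exp(1j * err), axis=1)
focused = pga_iteration(blurred)
print(np.abs(focused).max() / np.abs(blurred).max())    # > 1: sharper peaks
```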

  20. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

    In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, a theoretical analysis of statistical errors, digitization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out for the example of a variable three-step algorithm. In a second step, the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
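
    For phase retrieval with known (possibly deliberately varied) shifts, a least-squares formulation covers the variable three-step case. This generic version is an illustration under standard assumptions, not the paper's specific algorithm:

```python
import numpy as np

def three_step_phase(I, deltas):
    """Recover the phase from >= 3 interferograms with known phase shifts.

    Least-squares solution of I_k = a + u*cos(d_k) - v*sin(d_k), where
    u = b*cos(phi), v = b*sin(phi), so phi = atan2(v, u). Works for an
    arbitrary set of shifts, as in a variable three-step scheme.
    I: stack of images, shape (K, H, W); deltas: K known shifts [rad].
    """
    d = np.asarray(deltas)
    A = np.stack([np.ones_like(d), np.cos(d), -np.sin(d)], axis=1)   # (K, 3)
    coef, *_ = np.linalg.lstsq(A, I.reshape(len(d), -1), rcond=None)
    a, u, v = coef
    return np.arctan2(v, u).reshape(I.shape[1:])

# Synthetic check with shifts 0, 2*pi/3, 4*pi/3 (illustrative):
phi = np.linspace(-3.0, 3.0, 256).reshape(1, -1).repeat(4, axis=0)
deltas = [0, 2 * np.pi / 3, 4 * np.pi / 3]
I = np.stack([5 + 2 * np.cos(phi + d) for d in deltas])
print(np.allclose(three_step_phase(I, deltas), phi))   # True
```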

  1. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors, and investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of the error (initial phase errors, middle phase errors, final phase errors). The rates of escalation from "dispensing error" to "damage to patients" were then compared, as an index of danger, between the two groups and among the three classes. The rate of damage in the efficacy similarity (-) group was significantly higher than that in the efficacy similarity (+) group. Furthermore, the rate of damage was highest for initial phase errors and lowest for final phase errors. From these results, it became clear that the earlier a dispensing error occurs, the more severe the damage to patients becomes.

  2. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  3. Error Cost Escalation Through the Project Life Cycle

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as a project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3-8 units; at the manufacturing/build phase, to 7-16 units; at the integration and test phase, to 21-78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units.

  4. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, SPGD avoids having to detect the co-phase error explicitly. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
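
    The SPGD update itself is a few lines: perturb all actuator commands at once, measure the metric change two-sided, and step along the perturbation scaled by that change. A toy sketch with an invented quadratic metric standing in for image quality:

```python
import numpy as np

rng = np.random.default_rng(2)

def metric(u):
    """Image-quality metric to maximize; a toy quadratic with optimum u_star."""
    u_star = np.array([0.3, -0.7, 0.5])
    return -np.sum((u - u_star) ** 2)

u = np.zeros(3)                  # piston/tilt actuator commands (illustrative)
gain, amp = 5.0, 0.05            # gain coefficient and disturbance amplitude
for _ in range(500):
    du = amp * rng.choice([-1.0, 1.0], size=u.shape)   # random perturbation
    dJ = metric(u + du) - metric(u - du)               # two-sided measurement
    u += gain * dJ * du          # stochastic gradient ascent on the metric
print(np.round(u, 3))            # -> close to [0.3, -0.7, 0.5]
```

    Raising gain or amp speeds convergence at the cost of stability, matching the trade-off described in the abstract.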

  5. Influence of OPD in wavelength-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan

    2009-12-01

    Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, conventionally by moving an optical device, which becomes problematic when the device is large and heavy. To solve this problem, wavelength-shifting interferometry was put forward. In wavelength-shifting interferometry, the phase-shift angle is produced by changing the wavelength of the optical source; it is determined by the wavelength and by the OPD (optical path difference) between the test and reference wavefronts, so the OPD is an important factor in the measurement result. In practice, however, positional and profile errors of the optical element under test cause the phase-shift angle to differ from point to point during the wavelength scan, introducing phase-shift angle errors and, in turn, surface measurement errors. To analyze the influence of OPD on surface error, the relation between surface error and OPD was studied; through simulation, the relation between phase-shift error and OPD was established, and an error compensation method was put forward. After compensation, the measurement results can be improved to a great extent.
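
    The underlying relation is phi = 2*pi*OPD/lambda, so a wavelength step d_lambda produces a phase step d_phi = 2*pi*OPD*d_lambda/lambda^2; point-to-point OPD variation therefore maps directly into phase-shift angle error. The numbers below are illustrative, not from the paper:

```python
import numpy as np

lam, dlam = 632.8e-9, 0.002e-9        # source wavelength and scan step [m]
opd_nominal = 50e-3                    # nominal cavity OPD [m] (assumed)

# Nominal phase step for one wavelength increment.
dphi = 2 * np.pi * opd_nominal * dlam / lam**2
print(f"nominal phase step: {np.degrees(dphi):.1f} deg")   # ~90 deg

# Position/profile errors make the OPD vary over the test surface, so the
# actual phase step varies from point to point -> phase-shift angle error.
opd_err = 50e-6                        # +/-50 um OPD variation (assumed)
dphi_err = 2 * np.pi * opd_err * dlam / lam**2
print(f"phase-step error across surface: {np.degrees(dphi_err):.2f} deg")
```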

  7. Errors in the Extra-Analytical Phases of Clinical Chemistry Laboratory Testing.

    PubMed

    Zemlin, Annalise E

    2018-04-01

    The total testing process consists of various phases from the pre-preanalytical to the post-postanalytical phase, the so-called brain-to-brain loop. With improvements in analytical techniques and efficient quality control programmes, most laboratory errors now occur in the extra-analytical phases. There has been recent interest in these errors with numerous publications highlighting their effect on service delivery, patient care and cost. This interest has led to the formation of various working groups whose mission is to develop standardized quality indicators which can be used to measure the performance of service of these phases. This will eventually lead to the development of external quality assessment schemes to monitor these phases in agreement with ISO15189:2012 recommendations. This review focuses on potential errors in the extra-analytical phases of clinical chemistry laboratory testing, some of the studies performed to assess the severity and impact of these errors and processes that are in place to address these errors. The aim of this review is to highlight the importance of these errors for the requesting clinician.

  8. Ground settlement monitoring based on temporarily coherent points between two SAR acquisitions

    USGS Publications Warehouse

    Zhang, L.; Ding, X.; Lu, Z.

    2011-01-01

    An InSAR analysis approach for identifying and extracting the temporarily coherent points (TCP) that exist between two SAR acquisitions and for determining motions of the TCP is presented for applications such as ground settlement monitoring. TCP are identified based on the spatial characteristics of the range and azimuth offsets of coherent radar scatterers. A method for coregistering TCP based on the offsets of TCP is given to reduce the coregistration errors at TCP. An improved phase unwrapping method based on the minimum cost flow (MCF) algorithm and local Delaunay triangulation is also proposed for sparse TCP data. The proposed algorithms are validated using a test site in Hong Kong. The test results show that the algorithms work satisfactorily for various ground features.

  9. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency, and economy of snail surveys, a 50 m × 50 m experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling, and stratified random sampling were applied to calculate the minimum sample size, the relative sampling error, and the absolute sampling error. The minimum sample sizes of the simple random, systematic, and stratified random sampling methods were 300, 300, and 225, respectively. The relative sampling errors of the three methods were all less than 15%; the absolute sampling errors were 0.2217, 0.3024, and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for snail surveys.

  10. The Gulliver Effect: The Impact of Error in an Elephantine Subpopulation on Estimates for Lilliputian Subpopulations

    ERIC Educational Resources Information Center

    Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene

    2009-01-01

    An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…

  11. Holographic Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Andersen, G.

    For the last two decades adaptive optics has been used as a technique for correcting imaging applications and directed energy/laser targeting and laser communications systems affected by atmospheric turbulence. Typically these systems are bulky and limited to <10 kHz due to large computing overhead and limited photon efficiencies. Moreover most use zonal wavefront sensors which cannot easily handle extreme scintillation or unexpected obscuration of a pre-set aperture. Here we present a compact, lightweight adaptive optics system with the potential to operate at speeds of MHz. The system utilizes a hologram to perform an all-optical wavefront analysis that removes the need for any computer. Finally, the sensing is made on a modal basis so it is largely insensitive to scintillation and obscuration. We have constructed a prototype device and will present experimental results from our research. The holographic adaptive optics system begins with the creation of a multiplexed hologram. This hologram is created by recording the maximum and minimum response functions of every actuator in the deformable mirror against a unique focused reference beam. When a wavefront of some arbitrary phase is incident on the processed hologram, a number of focal spots are created -- one pair for each actuator in the DM. The absolute phase error at each particular actuator location is simply related to the ratio of the intensity of each pair of spots. In this way we can use an array of photodetectors to give a direct readout of phase error without the need for any calculations. The advantages of holographic adaptive optics are many. To begin with, the measurement of phase error is made all optically, so the wavefront sensor directly controls the actuators in the DM without any computers. Using fast, photon counting photodetectors allows for closed loop correction limited only by the speed of the deformable mirror which in the case of MEMS devices can be 100 kHz or more. All this can be achieved in an extremely compact and lightweight package making it perfectly suited to applications such as UAV surveillance imagery and free space optical communications systems. Lastly, since the correction is made on a modal basis instead of zonal, it is virtually insensitive to scintillation and obscuration.

  12. The influence of the uplink noise on the performance of satellite data transmission systems

    NASA Astrophysics Data System (ADS)

    Dewal, Vrinda P.

    The problem of transmission of binary phase shift keying (BPSK) modulated digital data through a bandlimited nonlinear satellite channel in the presence of uplink and downlink Gaussian noise and intersymbol interference is examined. The satellite transponder is represented by a zero-memory bandpass nonlinearity with AM/AM conversion. The proposed optimum linear receiver structure consists of tapped-delay lines followed by a decision device. The linear receiver is designed to minimize the mean square error, which is a function of the intersymbol interference and the uplink and downlink noise. The minimum mean square error (MMSE) equalizer is derived using Wiener-Kolmogorov theory. In this receiver, the decision about the transmitted signal is made by taking into account the received present sample and the interfering past and future samples, which represent the intersymbol interference (ISI). Illustrative examples of the receiver structures are considered for nonlinear channels with symmetrical and asymmetrical frequency responses of the transmitter filter. The transponder nonlinearity is simulated by a polynomial using only the first- and third-order terms. A computer simulation determines the tap gain coefficients of the MMSE equalizer, which adapt to the various uplink and downlink noise levels. The performance of the MMSE equalizer is evaluated in terms of an estimate of the average probability of error.
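
    A discrete-time sketch of the Wiener (MMSE) tapped-delay-line solution w = R^{-1} p, with an invented three-tap ISI channel standing in for the satellite channel model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discrete channel with ISI (memory 3), BPSK symbols, AWGN.
h = np.array([0.2, 1.0, 0.3])             # illustrative channel taps
sigma2 = 0.05
sym = rng.choice([-1.0, 1.0], size=10_000)
r = np.convolve(sym, h, mode="full")[: len(sym)]
r += np.sqrt(sigma2) * rng.standard_normal(len(sym))

# MMSE tapped-delay-line equalizer: w = R^{-1} p, with R the received-signal
# autocorrelation matrix and p the cross-correlation with the desired symbol,
# both estimated empirically here (Wiener solution, 11 taps).
L, delay = 11, 5
X = np.array([np.roll(r, k) for k in range(L)])    # delay-line snapshots
R = X @ X.T / len(sym)
p = X @ np.roll(sym, delay) / len(sym)
w = np.linalg.solve(R, p)

decisions = np.sign(w @ X)
ber = np.mean(decisions != np.roll(sym, delay))
print(f"BER after MMSE equalization: {ber:.4f}")
```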

  13. Dependence of neoclassical toroidal viscosity on the poloidal spectrum of applied nonaxisymmetric fields

    DOE PAGES

    Logan, Nikolas C.; Park, Jong-Kyu; Paz-Soldan, Carlos; ...

    2016-02-05

    This paper presents a single mode model that accurately predicts the coupling of applied nonaxisymmetric fields to the plasma response that induces neoclassical toroidal viscosity (NTV) torque in DIII-D H-mode plasmas. The torque is measured and modeled to have a sinusoidal dependence on the relative phase of multiple nonaxisymmetric field sources, including a minimum in which large amounts of nonaxisymmetric drive is decoupled from the NTV torque. This corresponds to the coupling and decoupling of the applied field to a NTV-driving mode spectrum. Modeling using the perturbed equilibrium nonambipolar transport (PENT) code confirms an effective single mode coupling between the applied field and the resultant torque, despite its inherent nonlinearity. Lastly, the coupling to the NTV mode is shown to have a similar dependence on the relative phasing as that of the IPEC dominant mode, providing a physical basis for the efficacy of this linear metric in predicting error field correction optima in NTV dominated regimes.

  14. Dependence of neoclassical toroidal viscosity on the poloidal spectrum of applied nonaxisymmetric fields

    NASA Astrophysics Data System (ADS)

    Logan, N. C.; Park, J.-K.; Paz-Soldan, C.; Lanctot, M. J.; Smith, S. P.; Burrell, K. H.

    2016-03-01

    This paper presents a single mode model that accurately predicts the coupling of applied nonaxisymmetric fields to the plasma response that induces neoclassical toroidal viscosity (NTV) torque in DIII-D H-mode plasmas. The torque is measured and modeled to have a sinusoidal dependence on the relative phase of multiple nonaxisymmetric field sources, including a minimum in which large amounts of nonaxisymmetric drive is decoupled from the NTV torque. This corresponds to the coupling and decoupling of the applied field to a NTV-driving mode spectrum. Modeling using the perturbed equilibrium nonambipolar transport (PENT) code confirms an effective single mode coupling between the applied field and the resultant torque, despite its inherent nonlinearity. The coupling to the NTV mode is shown to have a similar dependence on the relative phasing as that of the IPEC dominant mode, providing a physical basis for the efficacy of this linear metric in predicting error field correction optima in NTV dominated regimes.

  15. Authentication and Quantitation of Fraud in Extra Virgin Olive Oils Based on HPLC-UV Fingerprinting and Multivariate Calibration

    PubMed Central

    Carranco, Núria; Farrés-Cebrián, Mireia; Saurina, Javier

    2018-01-01

    A high-performance liquid chromatography method with ultraviolet detection (HPLC-UV) fingerprinting was applied to the analysis and characterization of olive oils, using a Zorbax Eclipse XDB-C8 reversed-phase column under gradient elution with 0.1% formic acid aqueous solution and methanol as the mobile phase. More than 130 edible oils, including monovarietal extra-virgin olive oils (EVOOs) and other vegetable oils, were analyzed. Principal component analysis results showed a noticeable discrimination between olive oils and other vegetable oils using raw HPLC-UV chromatographic profiles as data descriptors. However, selected HPLC-UV chromatographic time-window segments were necessary to achieve discrimination among monovarietal EVOOs. Partial least squares (PLS) regression was employed to tackle the authentication of Arbequina EVOO adulterated with Picual EVOO, a refined olive oil, and sunflower oil. Highly satisfactory results were obtained after PLS analysis, with overall errors in the quantitation of adulteration in the Arbequina EVOO (minimum 2.5% adulterant) below 2.9%. PMID:29561820

  16. Legal consequences of the moral duty to report errors.

    PubMed

    Hall, Jacqulyn Kay

    2003-09-01

    Increasingly, clinicians are under a moral duty to report errors to the patients who are injured by them. The sources of this duty are identified, and its probable impact on malpractice litigation and criminal law is discussed. The potential consequences of enforcing this new moral duty as a legal minimum are noted; one predicted consequence is that the trend toward government payment of compensation for errors will accelerate. The effect of truth-telling on individuals is discussed.

  17. Design and Optimization of the SPOT Primary Mirror Segment

    NASA Technical Reports Server (NTRS)

    Budinoff, Jason G.; Michaels, Gregory J.

    2005-01-01

    The 3 m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86 m point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require segment-to-segment radius of curvature (ROC) variation of approximately 1 micron. Low-cost replicated segment ROC variations are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment and matching. A mechanical architecture has been designed that allows the segment ROC to be adjusted by up to 400 microns while introducing minimal figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments. The back profile was developed with shape optimization in MSC.Nastran using optical performance response equations written with SigFit. A candidate back profile was generated which minimizes ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit

  18. Minimum constitutive relation error based static identification of beams using force method

    NASA Astrophysics Data System (ADS)

    Guo, Jia; Takewaki, Izuru

    2017-05-01

    A new static identification approach for beam structures based on the minimum constitutive relation error (CRE) principle is introduced. The exact stiffness and the exact bending moment are shown to minimize the CRE for the displacements measured on the damaged beam. A two-step substitution algorithm, with a force-method step for the bending moment and a constitutive-relation step for the stiffness, is developed and its convergence is rigorously derived. Identifiability is further discussed, and the stiffness in the undeformed region is found to be unidentifiable; an extra set of static measurements is added to remedy this drawback. Convergence and robustness are finally verified through numerical examples.

  19. Relation between minimum-error discrimination and optimum unambiguous discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, Daowen; Li, Lvjun

    2010-09-15

    In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U >= 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U >= 2Q_E may not hold again, but the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas for the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U >= (m/(m-1))Q_E.
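
    For two pure states with equal priors, the two failure probabilities have standard closed forms (the Helstrom and IDP limits), so the quoted two-state inequality can be checked directly; the sketch below is that textbook special case, not the paper's general mixed-state result:

```python
import numpy as np

# Equal-prior discrimination of two pure states with overlap s = |<psi0|psi1>|:
#   Helstrom (minimum-error) bound:  Q_E = (1 - sqrt(1 - s^2)) / 2
#   IDP (optimal unambiguous) bound: Q_U = s
s = np.linspace(0.0, 1.0, 1001)
Q_E = 0.5 * (1.0 - np.sqrt(1.0 - s**2))
Q_U = s
print(bool(np.all(Q_U >= 2 * Q_E - 1e-12)))   # True: Q_U >= 2 Q_E for all overlaps
```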

  20. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related to the Courant numbers, leading to significant extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information about the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. Its efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).

  1. Acquisition, representation, and transfer of models of visuo-motor error

    PubMed Central

    Zhang, Hang; Kulsa, Mila Kirstie C.; Maloney, Laurence T.

    2015-01-01

    We examined how human subjects acquire and represent models of visuo-motor error and how they transfer information about visuo-motor error from one task to a closely related one. The experiment consisted of three phases. In the training phase, subjects threw beanbags underhand towards targets displayed on a wall-mounted touch screen. The distribution of their endpoints was a vertically elongated bivariate Gaussian. In the subsequent choice phase, subjects repeatedly chose which of two targets varying in shape and size they would prefer to attempt to hit. Their choices allowed us to investigate their internal models of the visuo-motor error distribution, including the coordinate system in which they represented visuo-motor error. In the transfer phase, subjects repeated the choice phase from a different vantage point, the same distance from the screen but with the throwing direction shifted 45°. From the new vantage point, visuo-motor error was effectively expanded horizontally by a factor of √2. We found that subjects incorrectly assumed an isotropic distribution in the choice phase but that the anisotropy they assumed in the transfer phase agreed with an objectively correct transfer. We also found that the coordinate system used in coding two-dimensional visuo-motor error in the choice phase was effectively one-dimensional. PMID:26057549

  2. Automated design and optimization of flexible booster autopilots via linear programming. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.

    1972-01-01

    COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria are presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.

  3. How to Attain an Ultralow Interfacial Tension and a Three-Phase Behavior with a Surfactant Formulation for Enhanced Oil Recovery: A Review. Part 2. Performance Improvement Trends from Winsor's Premise to Currently Proposed Inter- and Intra-Molecular Mixtures.

    PubMed

    Salager, Jean-Louis; Forgiarini, Ana M; Márquez, Laura; Manchego, Lisbeth; Bullón, Johnny

    2013-01-01

    The occurrence of a minimum interfacial tension along a formulation scan, at the so-called optimum formulation, is discussed in relation to the interfacial curvature. The attained minimum tension is inversely proportional to the domain size of the bicontinuous microemulsion and to the interfacial layer rigidity, but no accurate prediction is available. The data from a very simple ternary system made of pure products accurately follow the correlation for optimum formulation and exhibit a linear relationship between the performance index, taken as the logarithm of the minimum tension at optimum, and the formulation variables. This relation is probably too simple when the number of variables is increased, as in practical cases. A review of published data for more realistic systems proposed for enhanced oil recovery over the past 30 years indicates general guidelines that follow Winsor's basic studies of surfactant-oil-water interfacial interactions. It is well known that the major performance benefits are achieved by blending amphiphilic species at the interface as intermolecular or intramolecular mixtures, sometimes in extremely complex formulations. The complexity is such that good knowledge of the possible trends and practical know-how to avoid trial and error are important for the practitioner in enhanced oil recovery.

  4. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, that arise in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is found that phase errors can be estimated well, regardless of position errors, when the zero Doppler bin is chosen. Based on these two observations, we propose a method obtained by modifying the conventional one. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, using the zero Doppler bin data, the phase error estimation can be performed independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion, and the joint iteration between gain-phase error estimation and position error estimation is no longer required. Therefore, the suboptimal convergence of the conventional method can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify its effectiveness.

  5. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    The full-field heterodyne interferometric measurement technology is increasingly applied, employing low-frequency heterodyne acousto-optic modulators instead of complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the received signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors; frequency mixing error and beat frequency error are two inescapable kinds. The effects of frequency mixing error on surface measurement are derived, and the relationship between the phase extraction accuracy and the errors is calculated. The tolerances of the extinction ratio of the polarization splitting prism and of the signal-to-noise ratio of stray light are given. The phase extraction error caused by beat frequency shifting in the Fourier analysis is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window is used to correct the heterodyne signal phase extraction. Simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
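
    The Fourier-analysis phase extraction amounts to reading the phase of the DFT bin at the beat frequency for each pixel's temporal signal; the sketch below does this for one synthetic pixel (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(4)

fs, f_beat, n = 1000.0, 50.0, 200            # beat tone sits exactly on bin 10
t = np.arange(n) / fs
true_phase = 1.234                           # the per-pixel quantity of interest
sig = 2.0 + np.cos(2 * np.pi * f_beat * t + true_phase) \
          + 0.05 * rng.standard_normal(n)    # offset + beat + detector noise

k = int(round(f_beat * n / fs))              # DFT bin of the beat frequency
phase_hat = np.angle(np.fft.fft(sig)[k])
print(f"extracted {phase_hat:.3f} rad (true {true_phase:.3f} rad)")

# A beat-frequency error moves the tone off bin k and biases phase_hat;
# spectrum-correction methods (e.g. amplitude-ratio correction with a
# Hanning window, as in the paper) estimate the offset and repair the phase.
```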

  6. Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry.

    PubMed

    Li, Beiwen; Liu, Ziping; Zhang, Song

    2016-10-03

    We propose a hybrid computational framework to reduce motion-induced measurement error by combining Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 extracts continuous relative phase maps for each isolated object with the single-shot FTP method and spatial phase unwrapping; Step 2 obtains an absolute phase map of the entire scene using the PSP method, although motion-induced errors exist in this map; and Step 3 shifts the continuous relative phase maps from Step 1 to generate the final absolute phase maps for each isolated object by referring to the error-bearing absolute phase map from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated, rapidly moving objects.
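
    A sketch of the single-shot FTP extraction used in Step 1: band-pass the +1 fringe order, shift it to baseband, and take the angle. The carrier position and synthetic fringe below are assumptions for illustration:

```python
import numpy as np

def ftp_phase(fringe, f0_bins, half_width):
    """Single-shot Fourier-transform-profilometry phase extraction (sketch).

    Band-pass the +1 fringe order around carrier bin f0_bins (per row),
    shift it to baseband, and take the angle -> wrapped relative phase.
    """
    F = np.fft.fft(fringe, axis=1)
    H = np.zeros_like(F)
    lo, hi = f0_bins - half_width, f0_bins + half_width + 1
    H[:, lo:hi] = F[:, lo:hi]                      # keep only the +1 order
    analytic = np.fft.ifft(H, axis=1)
    cols = fringe.shape[1]
    carrier = np.exp(-2j * np.pi * f0_bins * np.arange(cols) / cols)
    return np.angle(analytic * carrier)            # wrapped phase map

# Synthetic fringe with a smooth phase bump (illustrative parameters):
x = np.arange(256)
phi = 1.5 * np.exp(-((x - 128) / 40.0) ** 2)
fringe = (128 + 100 * np.cos(2 * np.pi * 16 * x / 256 + phi))[None, :].repeat(8, axis=0)
print(np.abs(ftp_phase(fringe, 16, 8)[0] - phi).max())   # small residual
```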

  7. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing an approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has characteristics similar to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, it is more robust to nonlinear distortions than the MMSE equalizer.

  8. Subband-Based Group Delay Segmentation of Spontaneous Speech into Syllable-Like Units

    NASA Astrophysics Data System (ADS)

    Nagarajan, T.; Murthy, H. A.

    2004-12-01

    In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolution-based technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25 milliseconds for 67% and 76.6% of the syllable segments, respectively.

  9. Movement trajectory smoothness is not associated with the endpoint accuracy of rapid multi-joint arm movements in young and older adults

    PubMed Central

    Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George

    2013-01-01

    The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101

  10. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar Pal; Arya, Rashmi

    2015-09-14

We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and the seventh-order corrected description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to a prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error does not show initial phase dependence for a circularly polarized laser pulse.

  11. Spectral factorization of wavefields and wave operators

    NASA Astrophysics Data System (ADS)

    Rickett, James Edward

    Spectral factorization is the problem of finding a minimum-phase function with a given power spectrum. Minimum phase functions have the property that they are causal with a causal (stable) inverse. In this thesis, I factor multidimensional systems into their minimum-phase components. Helical boundary conditions resolve any ambiguities over causality, allowing me to factor multi-dimensional systems with conventional one-dimensional spectral factorization algorithms. In the first part, I factor passive seismic wavefields recorded in two-dimensional spatial arrays. The result provides an estimate of the acoustic impulse response of the medium that has higher bandwidth than autocorrelation-derived estimates. Also, the function's minimum-phase nature mimics the physics of the system better than the zero-phase autocorrelation model. I demonstrate this on helioseismic data recorded by the satellite-based Michelson Doppler Imager (MDI) instrument, and shallow seismic data recorded at Long Beach, California. In the second part of this thesis, I take advantage of the stable-inverse property of minimum-phase functions to solve wave-equation partial differential equations. By factoring multi-dimensional finite-difference stencils into minimum-phase components, I can invert them efficiently, facilitating rapid implicit extrapolation without the azimuthal anisotropy that is observed with splitting approximations. The final part of this thesis describes how to calculate diagonal weighting functions that approximate the combined operation of seismic modeling and migration. These weighting functions capture the effects of irregular subsurface illumination, which can be the result of either the surface-recording geometry, or focusing and defocusing of the seismic wavefield as it propagates through the earth. Since they are diagonal, they can be easily both factored and inverted to compensate for uneven subsurface illumination in migrated images. Experimental results show that applying these weighting functions after migration leads to significantly improved estimates of seismic reflectivity.
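
    The one-dimensional core of this procedure is the classical Kolmogorov (cepstral) spectral factorization, which the thesis extends to multiple dimensions via the helical mapping. A minimal numpy sketch, assuming an even-length power spectrum sampled on a full FFT grid:

        import numpy as np

        def spectral_factor(power):
            """Kolmogorov (cepstral) spectral factorization: given samples
            of a power spectrum on the FFT grid, return the causal
            minimum-phase signal b with |FFT(b)|^2 equal to that power
            spectrum; b then has a causal (stable) inverse."""
            log_s = np.log(np.asarray(power, dtype=float) + 1e-30)
            cep = np.fft.ifft(log_s).real          # cepstrum of log power
            n = len(cep)
            w = np.zeros(n)
            w[0] = 0.5                             # split zero quefrency
            w[n // 2] = 0.5                        # and Nyquist quefrency
            w[1:n // 2] = 1.0                      # keep causal side only
            b_spec = np.exp(np.fft.fft(cep * w))   # minimum-phase spectrum
            return np.fft.ifft(b_spec).real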

  12. Research on calibration error of carrier phase against antenna arraying

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Hou, Xiaomin

    2016-11-01

A key technical difficulty of uplink antenna arraying is that the signals from the individual antennas cannot be aligned automatically at a target in deep space, and the size of the far-field power combining gain is directly determined by the accuracy of carrier phase calibration. Improving that accuracy therefore requires analyzing the entire arraying system. This paper analyzes the factors affecting the carrier phase calibration error of an uplink antenna arraying system, including phase measurement and equipment errors, uplink channel phase shift errors, position errors of the ground antennas, the calibration receiver and the target spacecraft, and errors due to atmospheric turbulence disturbances, for which a spatial and temporal autocorrelation model is discussed. Because the antennas of an uplink array have no common reference signal for continuous calibration, the system must be calibrated periodically, with each calibration referenced to communication with one or more spacecraft over a certain period; since deep space targets cannot align the combined received signal automatically, the alignment must be established in advance on the ground. The data show that, with existing technology, the error can be controlled within the demanded range, meeting the required accuracy of carrier phase calibration; the total error can be kept within a reasonable range.

  13. An investigation of reports of Controlled Flight Toward Terrain (CFTT)

    NASA Technical Reports Server (NTRS)

    Porter, R. F.; Loomis, J. P.

    1981-01-01

Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a more diverse nature and included a few instances of gross deviations from assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances that otherwise would most probably have ended in disaster.

  14. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
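
    For context, here is a minimal numpy sketch of the baseline two-frequency fringe order determination that the proposed constraints protect (function and variable names are illustrative; the paper's detection and correction strategy itself is not shown). The wrapped difference of the two phase maps acts as a beat phase with a single period across the field, and the rounding step is exactly where excessive phase error produces the wrong fringe orders discussed above.

        import numpy as np

        def unwrap_two_freq(phi_hi, phi_lo, f_hi, f_lo):
            """Recover the absolute phase of the high-frequency fringe
            from two wrapped phase maps at selected spatial frequencies
            f_hi and f_lo, assuming f_hi - f_lo yields a single beat
            period over the field of view."""
            two_pi = 2.0 * np.pi
            # Beat phase: wrapped difference at frequency f_hi - f_lo.
            beat = np.mod(phi_hi - phi_lo, two_pi)
            # Scale the beat phase up to the high frequency and round to
            # the integer fringe order; noise beyond the phase error
            # bound flips the rounded order.
            order = np.rint((f_hi / (f_hi - f_lo) * beat - phi_hi) / two_pi)
            return phi_hi + two_pi * order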

  15. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
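
    As a toy illustration of the idea (not the paper's full estimator, which handles a constant-plus-linear rigid-motion phase model and multichannel data), the maximum likelihood estimate of a spatially constant phase error between two shots under Gaussian noise reduces to the angle of their complex correlation:

        import numpy as np

        def constant_phase_offset(ref_shot, shot):
            """ML estimate of a spatially constant phase error between two
            complex shot images: the angle of their inner product."""
            return np.angle(np.vdot(ref_shot, shot))

        def correct_shot(shot, phase):
            """Remove the estimated phase before combining shots."""
            return shot * np.exp(-1j * phase)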

  16. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients--the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  17. Blind phase error suppression for color-encoded digital fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Ma, S.; Zhu, R.; Quan, C.; Li, B.; Tay, C. J.; Chen, L.

    2012-04-01

    Color-encoded digital fringe projection profilometry (CDFPP) has the advantage of fast speed, non-contact and full-field testing. It is one of the most important dynamic three-dimensional (3D) profile measurement techniques. However, due to factors such as color cross-talk and gamma distortion of electro-optical devices, phase errors arise when conventional phase-shifting algorithms with fixed phase shift values are utilized to retrieve phases. In this paper, a simple and effective blind phase error suppression approach based on isotropic n-dimensional fringe pattern normalization (INFPN) and carrier squeezing interferometry (CSI) is proposed. It does not require pre-calibration for the gamma and color-coupling coefficients or the phase shift values. Simulation and experimental works show that our proposed approach is able to effectively suppress phase errors and achieve accurate measurement results in CDFPP.

  18. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  19. Continuous slope-area discharge records in Maricopa County, Arizona, 2004–2012

    USGS Publications Warehouse

    Wiele, Stephen M.; Heaton, John W.; Bunch, Claire E.; Gardner, David E.; Smith, Christopher F.

    2015-12-29

    Analyses of sources of errors and the impact stage data errors have on calculated discharge time series are considered, along with issues in data reduction. Steeper, longer stream reaches are generally less sensitive to measurement error. Other issues considered are pressure transducer drawdown, capture of flood peaks with discrete stage data, selection of stage record for development of rating curves, and minimum stages for the calculation of discharge.

  20. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.

  1. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua

    2018-04-01

    Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.

  2. Multi-Window Controllers for Autonomous Space Systems

    NASA Technical Reports Server (NTRS)

Lurie, B. J.; Hadaegh, F. Y.

    1997-01-01

Multi-window controllers select between elementary linear controllers using nonlinear windows based on the amplitude and frequency content of the feedback error. The controllers are relatively simple to implement and perform much better than linear controllers. The commanders for such controllers only order the destination point and are freed from generating the command time-profiles. Robotic missions rely heavily on the tasks of acquisition and tracking. For autonomous and optimal control of the spacecraft, the control bandwidth must be larger while the feedback can (and, therefore, must) be reduced. Combining linear compensators via a multi-window nonlinear summer guarantees the minimum phase character of the combined transfer function. It is shown that the solution may require using several parallel branches and windows. Several examples of multi-window nonlinear controller applications are presented.

  3. Passive signal processing for a miniature Fabry-Perot interferometric sensor with a multimode laser-diode source

    NASA Astrophysics Data System (ADS)

    Ezbiri, A.; Tatam, R. P.

    1995-09-01

A passive signal-processing technique for addressing a miniature low-finesse fiber Fabry-Perot interferometric sensor with a multimode laser diode is reported. Two modes of a multimode laser diode separated by 3 nm are used to obtain quadrature outputs from an ~20 μm cavity. Wavelength-division demultiplexing combined with digital signal processing is used to recover the measurand-induced phase change. The technique is demonstrated for the measurement of vibration. The signal-to-noise ratio is ~70 dB at 500 Hz for ~π/2 rad displacement of the mirror, which results in a minimum detectable signal of ~200 μrad Hz^(-1/2). A quantitative discussion of miscalibration and systematic errors is presented.

  4. Quantum memory receiver for superadditive communication using binary coherent states

    NASA Astrophysics Data System (ADS)

    Klimek, Aleksandra; Jachura, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2016-11-01

We propose a simple architecture based on multimode quantum memories for collective readout of classical information keyed using a pair of coherent states, exemplified by the well-known binary phase shift keying format. Such a configuration enables demonstration of the superadditivity effect in classical communication over quantum channels, where the transmission rate becomes enhanced through joint detection applied to multiple channel uses. The proposed scheme relies on the recently introduced idea to prepare Hadamard sequences of input symbols that are mapped by a linear optical transformation onto the pulse position modulation format [Guha, S. Phys. Rev. Lett. 2011, 106, 240502]. We analyze two versions of readout based on direct detection and an optional Dolinar receiver which implements the minimum-error measurement for individual detection of a binary coherent state alphabet.
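
    The algebra behind the Hadamard-to-PPM mapping is easy to check numerically. The sketch below is a plain linear-algebra illustration, not a model of the optical hardware or the quantum memory: applying the Hadamard transform to one of its own rows, used as a BPSK codeword, concentrates all amplitude in a single pulse position, so a single detector click identifies the codeword.

        import numpy as np
        from scipy.linalg import hadamard

        n = 8                           # channel uses (a power of 2)
        H = hadamard(n)                 # rows are BPSK codewords (+1/-1)
        codeword = H[3]                 # amplitudes of 8 coherent pulses
        # Rows of H are orthogonal, so the transform maps codeword i to
        # the unit vector with all energy in pulse position i (PPM).
        ppm = H @ codeword / n
        print(np.nonzero(ppm)[0])       # -> [3]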

  5. Quantum memory receiver for superadditive communication using binary coherent states.

    PubMed

    Klimek, Aleksandra; Jachura, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2016-11-12

We propose a simple architecture based on multimode quantum memories for collective readout of classical information keyed using a pair of coherent states, exemplified by the well-known binary phase shift keying format. Such a configuration enables demonstration of the superadditivity effect in classical communication over quantum channels, where the transmission rate becomes enhanced through joint detection applied to multiple channel uses. The proposed scheme relies on the recently introduced idea to prepare Hadamard sequences of input symbols that are mapped by a linear optical transformation onto the pulse position modulation format [Guha, S. Phys. Rev. Lett. 2011, 106, 240502]. We analyze two versions of readout based on direct detection and an optional Dolinar receiver which implements the minimum-error measurement for individual detection of a binary coherent state alphabet.

  6. A transient-enhanced NMOS low dropout voltage regulator with parallel feedback compensation

    NASA Astrophysics Data System (ADS)

    Han, Wang; Lin, Tan

    2016-02-01

    This paper presents a transient-enhanced NMOS low-dropout regulator (LDO) for portable applications with parallel feedback compensation. The parallel feedback structure adds a dynamic zero to get an adequate phase margin with a load current variation from 0 to 1 A. A class-AB error amplifier and a fast charging/discharging unit are adopted to enhance the transient performance. The proposed LDO has been implemented in a 0.35 μm BCD process. From experimental results, the regulator can operate with a minimum dropout voltage of 150 mV at a maximum 1 A load and IQ of 165 μA. Under the full range load current step, the voltage undershoot and overshoot of the proposed LDO are reduced to 38 mV and 27 mV respectively.

  7. Monte Carlo simulation of errors in the anisotropy of magnetic susceptibility - A second-rank symmetric tensor. [for grains in sedimentary and volcanic rocks

    NASA Technical Reports Server (NTRS)

    Lienert, Barry R.

    1991-01-01

Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.

  8. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

In real-world applications, face images vary with illumination, facial expression, and pose, so more training samples are needed to reveal the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate the mirror faces from the original training samples and combine these two kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.
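
    A minimal numpy sketch of the two ingredients, assuming a ridge-regularized least-squares formulation of MSEC (the paper's exact objective may differ): mirrored images are appended as virtual samples, and the classifier weights solve a regularized normal equation. A probe image x is then assigned to the class with the largest entry of x @ W.

        import numpy as np

        def add_mirror_faces(images, labels):
            """Augment training data with horizontally flipped (mirror)
            faces; images has shape (n, height, width)."""
            mirrored = images[:, :, ::-1]
            X = np.concatenate([images, mirrored]).reshape(2 * len(images), -1)
            return X, np.concatenate([labels, labels])

        def train_msec(X, labels, n_classes, lam=0.01):
            """Minimum squared error classification: W minimizes
            ||X W - Y||^2 + lam ||W||^2 with Y one-hot class targets."""
            Y = np.eye(n_classes)[labels]
            A = X.T @ X + lam * np.eye(X.shape[1])
            return np.linalg.solve(A, X.T @ Y)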

  9. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum squared L2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).

  10. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.

  11. An enhanced computational method for age-at-death estimation based on the pubic symphysis using 3D laser scans and thin plate splines.

    PubMed

    Stoyanova, Detelina; Algee-Hewitt, Bridget F B; Slice, Dennis E

    2015-11-01

    The pubic symphysis is frequently used to estimate age-at-death from the adult skeleton. Assessment methods require the visual comparison of the bone morphology against age-informative characteristics that represent a series of phases. Age-at-death is then estimated from the age-range previously associated with the chosen phase. While easily executed, the "morphoscopic" process of feature-scoring and bone-to-phase-matching is known to be subjective. Studies of method and practitioner error demonstrate a need for alternative tools to quantify age-progressive change in the pubic symphysis. This article proposes a more objective, quantitative method that analyzes three-dimensional (3D) surface scans of the pubic symphysis using a thin plate spline algorithm (TPS). This algorithm models the bending of a flat plane to approximately match the surface of the bone and minimizes the bending energy required for this transformation. Known age-at-death and bending energy were used to construct a linear model to predict age from observed bending energy. This approach is tested with scans from 44 documented white male skeletons and 12 casts. The results of the surface analysis show a significant association (regression p-value = 0.0002 and coefficient of determination = 0.2270) between the minimum bending energy and age-at-death, with a root mean square error of ≈19 years. This TPS method yields estimates comparable to established methods but offers a fully integrated, objective and quantitative framework of analysis and has potential for use in archaeological and forensic casework. © 2015 Wiley Periodicals, Inc.

  12. Quadratic squeezing: An overview

    NASA Technical Reports Server (NTRS)

    Hillery, M.; Yu, D.; Bergou, J.

    1992-01-01

The amplitude of the electric field of a mode of the electromagnetic field is not a fixed quantity: there are always quantum mechanical fluctuations. The amplitude, having both a magnitude and a phase, is a complex number and is described by the mode annihilation operator a. It is also possible to characterize the amplitude by its real and imaginary parts, which correspond to the Hermitian and anti-Hermitian parts of a: X_1 = (a† + a)/2 and X_2 = i(a† − a)/2, respectively. These operators do not commute and, as a result, obey the uncertainty relation (ħ = 1) ΔX_1 ΔX_2 ≥ 1/4. From this relation we see that the amplitude fluctuates within an 'error box' in the complex plane whose area is at least 1/4. Coherent states, among them the vacuum state, are minimum uncertainty states with ΔX_1 = ΔX_2 = 1/2. A squeezed state, squeezed in the X_1 direction, has the property that ΔX_1 < 1/2. A squeezed state need not be a minimum uncertainty state, but those that are can be obtained by applying the squeeze operator.

  13. Short version of the Depression Anxiety Stress Scale-21: is it valid for Brazilian adolescents?

    PubMed Central

    da Silva, Hítalo Andrade; dos Passos, Muana Hiandra Pereira; de Oliveira, Valéria Mayaly Alves; Palmeira, Aline Cabral; Pitangui, Ana Carolina Rodarti; de Araújo, Rodrigo Cappato

    2016-01-01

ABSTRACT Objective To evaluate the interday reproducibility, agreement and construct validity of the short version of the Depression Anxiety Stress Scale-21 applied to adolescents. Methods The sample consisted of adolescents of both sexes, aged between 10 and 19 years, who were recruited from schools and sports centers. Construct validity was assessed by exploratory factor analysis, and reliability was calculated for each construct using the intraclass correlation coefficient, the standard error of measurement and the minimum detectable change. Results The factor analysis combining the items corresponding to anxiety and stress in a single factor, and depression in a second factor, showed a better fit for all 21 items, with higher factor loadings in their respective constructs. The reproducibility values for depression were an intraclass correlation coefficient of 0.86, a standard error of measurement of 0.80, and a minimum detectable change of 2.22; for anxiety/stress, they were an intraclass correlation coefficient of 0.82, a standard error of measurement of 1.80, and a minimum detectable change of 4.99. Conclusion The short version of the Depression Anxiety Stress Scale-21 showed excellent reliability and strong internal consistency. The two-factor model, with the constructs anxiety and stress condensed into a single factor, was the most acceptable for the adolescent population. PMID:28076595

  14. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan

    PubMed Central

    2017-01-01

Background Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician’s request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. Results The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395

  15. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    PubMed

    Najat, Dereen

    2017-01-01

Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.

  16. An intersecting chord method for minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Zhang, Qing; Liang, Lin; Liu, Dan

    2015-11-01

As one of the Geometrical Product Specifications widely applied in industrial manufacturing and measurement, sphericity error characterizes a 3D structure as a whole and reflects the machining quality of a spherical workpiece. Following increasing demands on the motion performance of spherical parts, sphericity error is becoming an indispensable component in the evaluation of form error. However, the evaluation of sphericity error is still considered a complex mathematical issue, and related research on the development of available models is lacking. In this paper, an intersecting chord method is first proposed to solve the minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error. This new modelling method leverages chord relationships to replace the characteristic points, thereby significantly reducing the computational complexity and improving the computational efficiency. Using the intersecting chords to generate a virtual centre, the reference sphere in two concentric spheres is simplified as a space intersecting structure. The position of the virtual centre on the space intersecting structure is determined by characteristic chords, which may reduce the deviation between the virtual centre and the centre of the reference sphere. In addition, two experiments are used to verify the effectiveness of the proposed method with real datasets in Cartesian coordinates. The results indicate that the estimated errors are in perfect agreement with those of the published methods, while the computational efficiency is improved, a remarkable change for high-performance evaluation of sphericity error.

  17. Accurate identification of microseismic P- and S-phase arrivals using the multi-step AIC algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Mengbo; Wang, Liguan; Liu, Xiaoming; Zhao, Jiaxuan; Peng, Ping'an

    2018-03-01

Identification of P- and S-phase arrivals is the primary work in microseismic monitoring. In this study, a new multi-step AIC algorithm is proposed. This algorithm consists of P- and S-phase arrival pickers (P-picker and S-picker). The P-picker contains three steps: in step 1, a preliminary P-phase arrival window is determined by the waveform peak. Then a preliminary P-pick is identified using the AIC algorithm. Finally, the P-phase arrival window is narrowed based on the above P-pick, so that the P-phase arrival can be identified accurately by applying the AIC algorithm again. The S-picker contains five steps: in step 1, a narrow S-phase arrival window is determined based on the P-pick and the AIC curve of the amplitude biquadratic time-series. In step 2, the S-picker automatically judges whether the S-phase arrival is clear enough to identify. In steps 3 and 4, the AIC extreme points are extracted, and the relationship between the local minima and the S-phase arrival is investigated. In step 5, the S-phase arrival is picked based on the maximum probability criterion. To evaluate the proposed algorithm, a P- and S-pick classification criterion is also established based on a source-location numerical simulation. The field data tests show a considerable improvement of the multi-step AIC algorithm in comparison with manual picks and the original AIC algorithm. Furthermore, the technique is robust across signal-to-noise ratios (SNRs): even in the poor-quality signal group, in which SNRs are below 5, the effective picking rates (corresponding location error < 15 m) of P- and S-phase arrivals are still 80.9% and 76.4%, respectively.
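
    The core of each step is the standard single-window AIC picker, sketched below in numpy (the multi-step windowing, the biquadratic time-series for the S-picker, and the maximum probability criterion are not shown):

        import numpy as np

        def aic_pick(x):
            """Single-window AIC onset picker: the AIC curve of a waveform
            segment attains its minimum at the phase arrival.
            AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            aic = np.full(n, np.inf)
            for k in range(2, n - 2):
                v1, v2 = np.var(x[:k]), np.var(x[k:])
                if v1 > 0 and v2 > 0:
                    aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
            return int(np.argmin(aic))

        # Multi-step idea: pick on a coarse window first, then narrow the
        # window around the preliminary pick and run aic_pick again.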

  18. Theta EEG dynamics of the error-related negativity.

    PubMed

    Trujillo, Logan T; Allen, John J B

    2007-03-01

    The error-related negativity (ERN) is a response-locked brain potential (ERP) occurring 80-100ms following response errors. This report contrasts three views of the genesis of the ERN, testing the classic view that time-locked phasic bursts give rise to the ERN against the view that the ERN arises from a pure phase-resetting of ongoing theta (4-7Hz) EEG activity and the view that the ERN is generated - at least in part - by a phase-resetting and amplitude enhancement of ongoing theta EEG activity. Time-domain ERP analyses were augmented with time-frequency investigations of phase-locked and non-phase-locked spectral power, and inter-trial phase coherence (ITPC) computed from individual EEG trials, examining time courses and scalp topographies. Simulations based on the assumptions of the classic, pure phase-resetting, and phase-resetting plus enhancement views, using parameters from each subject's empirical data, were used to contrast the time-frequency findings that could be expected if one or more of these hypotheses adequately modeled the data. Error responses produced larger amplitude activity than correct responses in time-domain ERPs immediately following responses, as expected. Time-frequency analyses revealed that significant error-related post-response increases in total spectral power (phase- and non-phase-locked), phase-locked power, and ITPC were primarily restricted to the theta range, with this effect located over midfrontocentral sites, with a temporal distribution from approximately 150-200ms prior to the button press and persisting up to 400ms post-button press. The increase in non-phase-locked power (total power minus phase-locked power) was larger than phase-locked power, indicating that the bulk of the theta event-related dynamics were not phase-locked to response. Results of the simulations revealed a good fit for data simulated according to the phase-locking with amplitude enhancement perspective, and a poor fit for data simulated according to the classic view and the pure phase-resetting view. Error responses produce not only phase-locked increases in theta EEG activity, but also increases in non-phase-locked theta, both of which share a similar topography. The findings are thus consistent with the notion advanced by Luu et al. [Luu P, Tucker DM, Makeig S. Frontal midline theta and the error-related negativity; neurophysiological mechanisms of action regulation. Clin Neurophysiol 2004;115:1821-35] that the ERN emerges, at least in part, from a phase-resetting and phase-locking of ongoing theta-band activity, in the context of a general increase in theta power following errors.

  19. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
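
    A minimal numpy sketch of such an approximate parametric bootstrap, assuming a fitted spline model has supplied the coefficient vector and its covariance matrix (the names and array shapes here are illustrative, not the authors' code):

        import numpy as np

        def mmt_confidence_interval(coef, vcov, basis, temps,
                                    n_boot=5000, alpha=0.05, seed=0):
            """Parametric bootstrap SE and CI for the minimum mortality
            temperature: resample spline coefficients from their
            estimated multivariate normal, locate each sampled curve's
            minimum on a temperature grid, take percentiles."""
            rng = np.random.default_rng(seed)
            draws = rng.multivariate_normal(coef, vcov, size=n_boot)
            curves = draws @ basis.T        # basis: (n_temps, n_coef)
            mmt = temps[np.argmin(curves, axis=1)]
            lo, hi = np.quantile(mmt, [alpha / 2, 1 - alpha / 2])
            return mmt.std(), (lo, hi)      # SE and 95% CI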

  20. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  1. A mathematical model of diurnal variations in human plasma melatonin levels

    NASA Technical Reports Server (NTRS)

    Brown, E. N.; Choe, Y.; Shanahan, T. L.; Czeisler, C. A.

    1997-01-01

Studies in animals and humans suggest that the diurnal pattern in plasma melatonin levels is due to the hormone's rates of synthesis, circulatory infusion and clearance, circadian control of synthesis onset and offset, environmental lighting conditions, and error in the melatonin immunoassay. A two-dimensional linear differential equation model of the hormone is formulated and is used to analyze plasma melatonin levels in 18 normal healthy male subjects during a constant routine. Recently developed Bayesian statistical procedures are used to incorporate correctly the magnitude of the immunoassay error into the analysis. The estimated parameters [median (range)] were clearance half-life of 23.67 (14.79-59.93) min, synthesis onset time of 2206 (1940-0029), synthesis offset time of 0621 (0246-0817), and maximum N-acetyltransferase activity of 7.17 (2.34-17.93) pmol x l(-1) x min(-1). All were in good agreement with values from previous reports. The difference between synthesis offset time and the phase of the core temperature minimum was 1 h 15 min (-4 h 38 min-2 h 43 min). The correlation between synthesis onset and the dim light melatonin onset was 0.93. Our model provides a more physiologically plausible estimate of the melatonin synthesis onset time than that given by the dim light melatonin onset and the first reliable means of estimating the phase of synthesis offset. Our analysis shows that the circadian and pharmacokinetics parameters of melatonin can be reliably estimated from a single model.

  2. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI

    PubMed Central

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km² in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km². The lake area decreased by -9.3% at an annual rate of -53.7 km² yr⁻¹ during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for lake area changes and their spatial variability. PMID:27007233

  3. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI.

    PubMed

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km² in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km². The lake area decreased by -9.3% at an annual rate of -53.7 km² yr⁻¹ during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for lake area changes and their spatial variability.
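
    The compositing step itself is simple; below is a minimal numpy sketch (the band array shapes and the water threshold are illustrative, not the paper's calibrated values). Taking the minimum NDVI over the window favors water, whose NDVI is low, which is why it detects lake pixels better than a maximum composite.

        import numpy as np

        def min_composite_ndvi(red, nir):
            """15-day minimum composite NDVI from stacks of daily 250 m
            reflectance with shape (days, rows, cols)."""
            ndvi = (nir - red) / (nir + red + 1e-12)
            return np.nanmin(ndvi, axis=0)

        # Lake pixels could then be classified by thresholding the
        # composite, e.g. composite < 0 (an illustrative threshold).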

  4. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain a low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, and a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis and further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.

  5. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
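
    The segmented-regression design can be sketched with ordinary least squares; the sketch below (illustrative variable names, no autocorrelation adjustment) fits a level change and a slope change at the implementation month, which is how the reported immediate 30% drop and the negative post-implementation slope are estimated:

        import numpy as np

        def segmented_regression(y, t0):
            """Interrupted time-series (segmented) regression with level
            and slope change at intervention month t0:
            y[t] = b0 + b1*t + b2*step + b3*(t - t0)*step + error."""
            t = np.arange(len(y), dtype=float)
            step = (t >= t0).astype(float)
            X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            # beta = [baseline level, baseline trend,
            #         level change, slope change]
            return beta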

  6. COMAP: a new computational interpretation of human movement planning level based on coordinated minimum angle jerk policies and six universal movement elements.

    PubMed

    Emadi Andani, Mehran; Bahrami, Fariba

    2012-10-01

    Flash and Hogan (1985) suggested that the CNS employs a minimum jerk strategy when planning any given movement. Later, Nakano et al. (1999) showed that minimum angle jerk predicts the actual arm trajectory curvature better than the minimum jerk model. Friedman and Flash (2009) confirmed this claim. Besides the behavioral support that we will discuss, we will show that this model allows simplicity in planning any given movement. In particular, we prove mathematically that each movement that satisfies the minimum joint angle jerk condition is reproducible by a linear combination of six functions. These functions are calculated independent of the type of the movement and are normalized in the time domain. Hence, we call these six universal functions the Movement Elements (ME). We also show that the kinematic information at the beginning and end of the movement determines the coefficients of the linear combination. On the other hand, in analyzing recorded data from sit-to-stand (STS) transfer, arm-reaching movement (ARM) and gait, we observed that minimum joint angle jerk condition is satisfied only during different successive phases of these movements and not for the entire movement. Driven by these observations, we assumed that any given ballistic movement may be decomposed into several successive phases without overlap, such that for each phase the minimum joint angle jerk condition is satisfied. At the boundaries of each phase the angular acceleration of each joint should obtain its extremum (zero third derivative). As a consequence, joint angles at each phase will be linear combinations of the introduced MEs. Coefficients of the linear combination at each phase are the values of the joint kinematics at the boundaries of that phase. Finally, we conclude that these observations may constitute the basis of a computational interpretation, put differently, of the strategy used by the Central Nervous System (CNS) for motor planning. We call this possible interpretation "Coordinated Minimum Angle jerk Policy" or COMAP. Based on this policy, the function of the CNS in generating the desired pattern of any given task (like STS, ARM or gait) can be described computationally using three factors: (1) the kinematics of the motor system at given body states, i.e., at certain movement events/instances, (2) the time length of each phase, and (3) the proposed MEs. From a computational point of view, this model significantly simplifies the processes of movement planning as well as feature abstraction for saving characterizing information of any given movement in memory. Copyright © 2012 Elsevier B.V. All rights reserved.
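
    Since a minimum angle jerk trajectory is a quintic polynomial in normalized time, each phase of a movement is a linear combination of six time-normalized basis functions whose coefficients follow from the boundary kinematics. A minimal numpy sketch using the monomial basis 1, tau, ..., tau^5, which spans the same space as the paper's movement elements (the paper's particular choice of MEs may differ):

        import numpy as np

        def minimum_jerk_angle(theta0, theta1, dtheta=(0.0, 0.0),
                               ddtheta=(0.0, 0.0), n=101):
            """Minimum (angle) jerk trajectory of one joint over
            normalized time tau in [0, 1]: a quintic polynomial whose
            six coefficients are fixed by position, velocity and
            acceleration at the start and end of the phase."""
            tau = np.linspace(0.0, 1.0, n)
            # Rows: theta, theta', theta'' evaluated at tau = 0 and 1.
            M = np.array([
                [1, 0, 0, 0, 0, 0],    # position at 0
                [0, 1, 0, 0, 0, 0],    # velocity at 0
                [0, 0, 2, 0, 0, 0],    # acceleration at 0
                [1, 1, 1, 1, 1, 1],    # position at 1
                [0, 1, 2, 3, 4, 5],    # velocity at 1
                [0, 0, 2, 6, 12, 20],  # acceleration at 1
            ], dtype=float)
            b = np.array([theta0, dtheta[0], ddtheta[0],
                          theta1, dtheta[1], ddtheta[1]])
            c = np.linalg.solve(M, b)  # coefficients of the six elements
            return sum(c[k] * tau**k for k in range(6))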

  7. A MIMO radar quadrature and multi-channel amplitude-phase error combined correction method based on cross-correlation

    NASA Astrophysics Data System (ADS)

    Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan

    2018-04-01

Quadrature error and multi-channel amplitude-phase error must be compensated in I/Q quadrature sampling and in signals passing through multiple channels. This paper presents a new method that requires neither a filter nor a standard reference signal and that jointly estimates the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and the amplitude ratio between signals to estimate the two amplitude-phase errors simply and effectively. The advantages of the method are verified by computer simulation, and its superiority is further verified with measured data from outfield experiments.

  8. VizieR Online Data Catalog: AKARI IRC asteroid sample diameters & albedos (Ali-Lagoa+, 2018)

    NASA Astrophysics Data System (ADS)

    Ali-Lagoa, V.; Mueller, T. G.; Usui, F.; Hasegawa, S.

    2017-11-01

    Table 1 contains the best-fitting values of size and beaming parameter and corresponding visible geometric albedos for the full AKARI IRC sample. We fitted the near-Earth asteroid thermal model (NEATM) of Harris (1998Icar..131..291H) to the AKARI IRC thermal infrared data (Murakami et al., 2007PASJ...59S.369M, Onaka et al., 2007PASJ...59S.401O, Ishihara et al., 2010A&A...514A...1I, Cat. II/297, Usui et al., 2011PASJ...63.1117U, Cat. J/PASJ/63/1117, Takita et al., 2012PASJ...64..126T, Hasegawa et al., 2013PASJ...65...34H, Cat. J/PASJ/65/34). The NEATM implementation is described in Ali-Lagoa and Delbo' (2017A&A...603A..55A, cat. J/A+A/603/A55). Minimum relative errors of 10, 15, and 20 percent are given for size, beaming parameter and albedo in those cases where the beaming parameter could be fitted. Otherwise, a default value of the beaming parameter is assumed based on Eq. 1 in the article, and the minimum relative errors in size and albedo increase to 20 and 40 percent (see the discussions in Mainzer et al., 2011ApJ...736..100M, Ali-Lagoa et al., 2016A&A...591A..14A, Cat. J/A+A/591/A14). We also provide the asteroid absolute magnitudes and G12 slope parameters retrieved from Oszkiewicz et al. (2012), the number of observations used in each IRC band (S9W and L18W), plus the heliocentric and geocentric distances and phase angle (r, Delta, alpha) based on the ephemerides taken from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
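
    For orientation, the diameter, visible geometric albedo, and absolute magnitude in catalogs of this kind are tied together by the standard relation D = 1329 km x 10^(-H/5) / sqrt(p_V); a quick check with invented numbers:

    ```python
    def geometric_albedo(diameter_km, H):
        """Visible geometric albedo from diameter and absolute magnitude H via
        D = 1329 km * 10**(-H/5) / sqrt(p_V)."""
        return (1329.0 * 10.0 ** (-H / 5.0) / diameter_km) ** 2

    # Invented example, not a catalog entry: a 10 km body with H = 14.0
    print(geometric_albedo(10.0, 14.0))   # ~0.044, a dark surface
    ```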

  9. Probabilistic evaluation of earthquake detection and location capability for Illinois, Indiana, Kentucky, Ohio, and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Christensen, D.H.

    1980-09-01

    Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.

  10. Free energy decomposition of protein-protein interactions.

    PubMed

    Noskov, S Y; Lim, C

    2001-08-01

    A free energy decomposition scheme has been developed and tested on antibody-antigen and protease-inhibitor binding, for which accurate experimental structures were available for both free and bound proteins. Using the x-ray coordinates of the free and bound proteins, the absolute binding free energy was computed assuming additivity of three well-defined, physical processes: desolvation of the x-ray structures, isomerization of the x-ray conformation to a nearby local minimum in the gas phase, and subsequent noncovalent complex formation in the gas phase. This free energy scheme, together with the Generalized Born model for computing the electrostatic solvation free energy, yielded binding free energies in remarkable agreement with experimental data. Two assumptions commonly used in theoretical treatments, viz., the rigid-binding approximation (which assumes no conformational change upon complexation) and the neglect of vdW interactions, were found to yield large errors in the binding free energy. Protein-protein vdW and electrostatic interactions between complementary surfaces over a relatively large area (1400-1700 Å²) were found to drive antibody-antigen and protease-inhibitor binding.
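
    The additivity assumption above can be written out explicitly; the notation in this sketch of the thermodynamic cycle is ours, not the paper's:

    ```latex
    % Binding free energy as a sum over the three physical steps named above
    \Delta G_{\mathrm{bind}} =
        \underbrace{\Delta G_{\mathrm{desolv}}}_{\text{desolvation of x-ray structures}}
      + \underbrace{\Delta G_{\mathrm{isom}}}_{\text{gas-phase isomerization}}
      + \underbrace{\Delta G_{\mathrm{gas}}}_{\text{gas-phase complex formation}}
    ```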

  11. All the nonadiabatic (J=0) bound states of NO₂

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzgeber, R.F.; Mandelshtam, V.A.; Schlier, C.

    1999-02-01

    We calculated all 3170 A₁ and B₂ (J=0) vibronic bound states of the coupled electronic ground (X̃ ²A₁) and first excited (Ã ²B₂) surfaces of NO₂, using a modification of the ab initio potentials of Leonardi et al. [J. Chem. Phys. 105, 9051 (1996)]. The calculation was performed by harmonic inversion of the Chebyshev correlation function generated from a DVR Hamiltonian in Radau coordinates. The rms error of the eigenenergies is about 2.5 cm⁻¹, corresponding to a relative error of 10⁻⁴ near the dissociation energy. The results are compared with the adiabatic and diabatic levels calculated from the same surfaces, with experimental data, and with some approximations for the number-of-states function N(E). The experimental levels are reproduced fairly well up to an energy of 12,000 cm⁻¹ above the potential minimum, while the total number of bound levels agrees to within 2% with that calculated from the phase space volume. © 1999 American Institute of Physics.

  12. Sequence-dependent rotation axis changes and interaction torque use in overarm throwing.

    PubMed

    Hansen, Clint; Rezzoug, Nasser; Gorce, Philippe; Venture, Gentiane; Isableu, Brice

    2016-01-01

    We examined the role of rotation axes during an overarm throwing task in which participants were asked to throw a ball at maximal velocity at a target. The purpose of this study was to examine whether the minimum inertia axis would be exploited during the throwing phases, a time when internal-external rotations of the shoulder are particularly important. A motion capture system was used to evaluate the performance and to compute the potential axes of rotation (the minimum inertia axis, the shoulder-centre of mass axis and the shoulder-elbow axis). More specifically, we investigated whether a velocity-dependent change in rotation axes can be observed in the different throwing phases and whether the control obeys the principle of minimum inertia resistance. Our results showed that the limb's rotation axis mainly coincides with the minimum inertia axis during the cocking phase and with the shoulder-elbow axis during the acceleration phase. Besides these rotation-axis changes, the use of interaction torque is also sequence-dependent. The sequence-dependent rotation-axis changes associated with the use of interaction torque during the acceleration phase could be a key factor in the production of hand velocity at ball release.

  13. Robust adaptive cruise control of high speed trains.

    PubMed

    Faieghi, Mohammadreza; Jalali, Aliakbar; Mashhadi, Seyed Kamal-e-ddin Mousavi

    2014-03-01

    The cruise control problem of high speed trains in the presence of unknown parameters and external disturbances is considered. In particular, a Lyapunov-based robust adaptive controller is presented to achieve asymptotic tracking and disturbance rejection. The system under consideration is nonlinear, MIMO and non-minimum phase. To deal with the limitations arising from the unstable zero-dynamics, we perform an output redefinition such that the zero-dynamics with respect to the new outputs become stable. Rigorous stability analyses are presented which establish the boundedness of all internal states together with asymptotic stability of the tracking error dynamics. The results are presented for two common configurations of high speed trains, i.e., the DD and PPD designs, based on the multi-body model, and are verified by several numerical simulations. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
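
    As background to the zero-dynamics issue, whether a linear model is minimum phase can be read directly off the zeros of its transfer function; the toy check below is our own illustration, not the paper's train model.

    ```python
    import numpy as np

    def is_minimum_phase_ct(num):
        """Continuous-time check: all transfer-function zeros strictly in the
        open left half-plane (equivalently, stable zero dynamics)."""
        zeros = np.roots(num)
        return bool(np.all(zeros.real < 0))

    # Illustrative numerator polynomials in s (highest order first):
    print(is_minimum_phase_ct([1.0, 2.0]))    # zero at s = -2 -> True
    print(is_minimum_phase_ct([1.0, -3.0]))   # zero at s = +3 -> False (non-minimum phase)
    ```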

  14. Programmable Differential Delay Circuit With Fine Delay Adjustment

    DOEpatents

    DeRyckere, John F.; Jenkins, Philip Nord; Cornett, Frank Nolan

    2002-07-09

    Circuitry that provides additional delay to early arriving signals such that all data signals arrive at a receiving latch with the same path delay. The delay of a forwarded clock reference is also controlled such that the capturing clock edge will be optimally positioned near quadrature (depending on latch setup/hold requirements). The circuitry continuously adapts to data and clock path delay changes, and digital filtering of phase measurements reduces errors brought on by jittering data edges. The circuitry utilizes only the minimum amount of delay necessary to achieve its objective, thereby limiting any unintended jitter. In particular, this programmable differential delay circuit with fine delay adjustment is designed to allow the skew between ASICs to be minimized. This includes skew between data bits, between data bits and clocks, as well as minimizing the overall skew in a channel between ASICs.

  15. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
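
    For reference, the standard way to extract an observed order of accuracy from a refinement study uses solutions on three grids with a constant refinement ratio; a minimal sketch with a manufactured example (our illustration, not the paper's test cases):

    ```python
    import numpy as np

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order of accuracy p from three grids with constant
        refinement ratio r (h_coarse = r*h_medium = r**2*h_fine)."""
        return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

    # Manufactured example: f(h) = 1 + 0.5*h**2, so the design order is 2
    h = np.array([0.4, 0.2, 0.1])           # refinement ratio r = 2
    f = 1.0 + 0.5 * h**2
    print(observed_order(f[0], f[1], f[2], 2.0))   # -> 2.0
    ```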

  16. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.

  17. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management strategy which aims to construct the optimal portfolio to achieve a return similar to the benchmark index return at minimum tracking error without purchasing all the stocks that make up the index. Enhanced index tracking is an improved strategy which aims to generate higher portfolio return than the benchmark index return besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which contributes a 67% improvement in the portfolio mean return as compared to the existing model.
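
    The basic index-tracking subproblem, choosing weights that minimize the tracking error against the benchmark returns subject to the weights summing to one, is a small constrained least-squares problem; a numpy sketch on synthetic data (not the Malaysian Technology Index data used in the paper, and ignoring no-short-sale and cardinality constraints):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, k = 250, 5                                    # trading days, candidate stocks
    X = 0.01 * rng.standard_normal((T, k))           # synthetic stock returns
    y = X @ np.array([0.3, 0.3, 0.2, 0.1, 0.1])      # synthetic benchmark returns
    y = y + 0.001 * rng.standard_normal(T)

    # Minimize ||X w - y||^2 subject to sum(w) = 1, via the KKT linear system
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = 2.0 * X.T @ X
    A[:k, k] = 1.0
    A[k, :k] = 1.0
    b = np.append(2.0 * X.T @ y, 1.0)
    w = np.linalg.solve(A, b)[:k]

    tracking_error = np.std(X @ w - y, ddof=1)
    print(np.round(w, 3), tracking_error)
    ```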

  18. Changes in the relationship NAO-Northern hemisphere temperature due to solar activity

    NASA Astrophysics Data System (ADS)

    Gimeno, Luis; de la Torre, Laura; Nieto, Raquel; García, Ricardo; Hernández, Emiliano; Ribera, Pedro

    2003-01-01

    The influence of the North Atlantic Oscillation (NAO) on wintertime Northern Hemisphere Temperature (NHT) is investigated. The results suggest that this relationship has a different sign according to the phase of the solar cycle. For solar maximum phases NAO and NHT are positively correlated - a result assumed up to now - but for solar minimum phases correlations are not significant or even negative. This result is in agreement with the different extension of the NAO for solar cycle phases [Kodera, Geophys. Res. Lett. 29 (2002) 14557-14560] - almost hemispheric for maximum phases and confined to the eastern Atlantic for minimum phases.

  19. Measurement uncertainty evaluation of conicity error inspected on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, antibody clones are generated and self-adaptively mutated so as to maintain diversity; similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone parts were machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
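
    The Monte Carlo route to measurement uncertainty propagates samples of the input quantities through the (possibly nonlinear) measurement model and reads the uncertainty off the output distribution. The fixed-trial sketch below conveys the idea behind the adaptive variant; the cone-angle measurand and all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    M = 200_000                                   # Monte Carlo trials (fixed here;
                                                  # AMCM would adapt this number)
    # Toy measurand: cone half-angle from two probed radii and their axial spacing
    r1 = rng.normal(10.000, 0.002, M)             # mm, measured radius 1
    r2 = rng.normal(12.500, 0.002, M)             # mm, measured radius 2
    dz = rng.normal(20.000, 0.005, M)             # mm, axial spacing

    alpha = np.arctan((r2 - r1) / dz)             # model evaluated per trial

    est = np.mean(alpha)
    u = np.std(alpha, ddof=1)                     # standard uncertainty
    lo, hi = np.quantile(alpha, [0.025, 0.975])   # 95 % coverage interval
    print(est, u, (lo, hi))
    ```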

  20. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. The real-time, early-warning monitoring of landslides is therefore important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of fiber Bragg gratings (FBGs), an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was treated as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of a landslide could be calculated by the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results become increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision and early-warning monitoring of slopes.

  1. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

    Imaging distant objects at a high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence. This effectively allows the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere creates errors in the phase measurements. The measurements are taken simultaneously across a large bandwidth of light; the atmospheric piston error therefore manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This rules out commonly used techniques like closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has been focused on forming an image, using sub-Nyquist sampled data, in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors. We demonstrate our algorithm's success in both simulation and in laboratory experiments.

  2. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As data from a high resolution imaging sensor, synthetic aperture ladar data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into the spotlight-mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because of a more efficient initialization of the blind deconvolution. Moreover, in a further examination of the hybrid method, the weight distribution between ROPE and IBD is found to be an important factor affecting the final result of the whole compensation process.

  3. 29 CFR 510.23 - Agricultural activities eligible for minimum wage phase-in.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Agricultural activities eligible for minimum wage phase-in. 510.23 Section 510.23 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION... FAIR LABOR STANDARDS ACT IN PUERTO RICO Classification of Industries § 510.23 Agricultural activities...

  4. Equalizing secondary path effects using the periodicity of fMRI acoustic noise.

    PubMed

    Kannan, Govind; Milani, Ali A; Panahi, Issa; Briggs, Richard

    2008-01-01

    A non-minimum phase secondary path has a direct effect on achieving a desired noise attenuation level in active noise control (ANC) systems. The adaptive noise canceling filter is often a causal FIR filter, which may not be able to sufficiently equalize the effect of a non-minimum phase secondary path, since in theory only a non-causal filter can equalize it. However, a non-causal stable filter can be found to equalize the non-minimum phase effect of the secondary path. Realization of non-causal stable filters requires knowledge of future values of the input signal. In this paper we develop methods for equalizing the non-minimum phase property of the secondary path and improving the performance of an ANC system by exploiting the periodicity of fMRI acoustic noise. It has been shown that the scanner noise component is highly periodic and hence predictable, which enables easy realization of non-causal filtering. Improvement in performance due to the proposed methods (with and without the equalizer) is shown for periodic fMRI acoustic noise.
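
    The enabling trick is that for a P-periodic signal a "future" sample x[n+k] equals x[n+k-P], so every anticausal tap can be realized as a causal delay of P-k samples. The sketch below is our own toy rendering of that substitution, not the paper's ANC implementation:

    ```python
    import numpy as np

    def noncausal_fir_periodic(x, h, origin, period):
        """Apply an FIR filter whose tap k sits at lag (k - origin); negative
        lags (future samples) are replaced by samples one period earlier."""
        y = np.zeros(len(x))
        n = np.arange(len(x))
        for k, hk in enumerate(h):
            lag = k - origin                       # lag < 0 means a future sample
            idx = n - lag if lag >= 0 else n - lag - period
            ok = (idx >= 0) & (idx < len(x))
            y[ok] += hk * x[idx[ok]]
        return y

    period = 100                                   # known noise period in samples
    x = np.tile(np.sin(2 * np.pi * np.arange(period) / period), 20)
    h = np.array([0.25, 0.5, 0.25])                # symmetric, hence non-causal
    y = noncausal_fir_periodic(x, h, origin=1, period=period)
    ```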

  5. A Dynamic Attitude Measurement System Based on LINS

    PubMed Central

    Li, Hanzhou; Pan, Quan; Wang, Xiaoxu; Zhang, Juanni; Li, Jiang; Jiang, Xiangjun

    2014-01-01

    A dynamic attitude measurement system (DAMS) is developed based on a laser inertial navigation system (LINS). Three factors in the dynamic attitude measurement error using LINS are analyzed: dynamic error, time synchronization and phase lag. An optimal coning-error compensation algorithm is used to reduce coning errors, and two-axis wobbling verification experiments are presented in the paper. The tests indicate that the attitude accuracy is improved 2-fold by the algorithm. In order to decrease coning errors further, the attitude updating frequency is increased from 200 Hz to 2000 Hz. At the same time, a novel finite impulse response (FIR) filter with three notches is designed to filter the dither frequency of the ring laser gyro (RLG). The comparison tests suggest that the new filter is five times more effective than the old one. The paper indicates that the phase-frequency characteristics of the FIR filter and the first-order holder of the navigation computer constitute the main sources of phase lag in LINS. A formula to calculate the LINS attitude phase lag is introduced in the paper. The expressions for dynamic attitude errors induced by phase lag are derived. The paper proposes a novel synchronization mechanism that is able to simultaneously solve the problems of dynamic test synchronization and phase compensation. A single-axis turntable and a laser interferometer are applied to verify the synchronization mechanism. The experimental results show that the theoretically calculated values of phase lag and the attitude error induced by phase lag both match perfectly with the test data. The block diagram of the DAMS and physical photos are presented in the paper. The final experiments demonstrate that the real-time attitude measurement accuracy of the DAMS can reach up to 20″ (1σ) and the synchronization error is less than 0.2 ms under three-axis wobbling for 10 min. PMID:25177802

  6. Continued investigation of potential application of Omega navigation to civil aviation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1978-01-01

    Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for several receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase-amplitude correlation studies are presented.

  7. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal-oxide-semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure contactless absolute angle. There are various error factors influencing the measuring accuracy, which are difficult to locate after the assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational load. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and reduces the iteration count significantly.

  8. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques governed under the Mahalanobis-Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is always possible even with a very limited sample size. The user of the T-Method is required to understand the population data trend clearly, since the method does not consider the effect of outliers within the data. Outliers may cause apparent non-normality and cause classical methods to break down. There exist robust parameter estimates that provide satisfactory results when the data contain outliers, as well as when the data are free of them. Among them are the robust estimates of location and scale called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method and allow the robustness of the T-Method itself to be analysed. However, the results of the larger-sample case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers. HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared to the T-Method. The error percentage prediction trend is reversed for the smaller-sample case study. The results show that with a minimum sample size, where outliers are always at low risk, the T-Method performs much better, while for a larger sample size with extreme outliers the T-Method also shows better prediction compared to the others. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB (or the normal mean and standard deviation) into it, since they provide only a minimal change in the error percentages. Normalization using the T-Method is still considered to carry a lower risk of outlier effects.
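
    For reference, the two robust estimators named above are simple to compute: the Hodges-Lehmann location estimate is the median of all pairwise averages, and the Shamos(-Bickel) scale estimate is a rescaled median of all pairwise absolute differences. A numpy sketch with invented data:

    ```python
    import numpy as np

    def hodges_lehmann(x):
        """Median of all pairwise means (Walsh averages): robust location."""
        x = np.asarray(x, dtype=float)
        i, j = np.triu_indices(len(x))          # pairs including i == j
        return np.median((x[i] + x[j]) / 2.0)

    def shamos(x):
        """Rescaled median of pairwise absolute differences: robust scale
        (the factor 1.0483 makes it consistent for the normal sigma)."""
        x = np.asarray(x, dtype=float)
        i, j = np.triu_indices(len(x), k=1)     # distinct pairs only
        return 1.0483 * np.median(np.abs(x[i] - x[j]))

    data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 35.0])   # one gross outlier
    print(hodges_lehmann(data), shamos(data))  # barely moved by the outlier
    ```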

  9. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390

  10. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources, and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.

  11. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^{-(d-1)} error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  12. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. The phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  13. Carrier recovery methods for a dual-mode modem: A design approach

    NASA Technical Reports Server (NTRS)

    Richards, C. W.; Wilson, S. G.

    1984-01-01

    A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that can provide little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of the 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. Results show that an energy increase of roughly 4 dB is needed to maintain a SER of 1×10⁻⁵ for QPSK with 20 deg of phase error and for 16-QASK with 7 deg of phase error.

  14. Void Growth and Coalescence Simulations

    DTIC Science & Technology

    2013-08-01

    distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to estimate errors; thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook

  15. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness for one or many machines and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  16. Film thickness measurement based on nonlinear phase analysis using a Linnik microscopic white-light spectral interferometer.

    PubMed

    Guo, Tong; Chen, Zhuo; Li, Minghui; Wu, Juhong; Fu, Xing; Hu, Xiaotang

    2018-04-20

    Based on white-light spectral interferometry and the Linnik microscopic interference configuration, the nonlinear phase components of the spectral interferometric signal were analyzed for film thickness measurement. The spectral interferometric signal was obtained using a Linnik microscopic white-light spectral interferometer, which includes the nonlinear phase components associated with the effective thickness, the nonlinear phase error caused by the double-objective lens, and the nonlinear phase of the thin film itself. To determine the influence of the effective thickness, a wavelength-correction method was proposed that converts the effective thickness into a constant value; the nonlinear phase caused by the effective thickness can then be determined and subtracted from the total nonlinear phase. A method for the extraction of the nonlinear phase error caused by the double-objective lens was also proposed. Accurate thickness measurement of a thin film can be achieved by fitting the nonlinear phase of the thin film after removal of the nonlinear phase caused by the effective thickness and by the nonlinear phase error caused by the double-objective lens. The experimental results demonstrated that both the wavelength-correction method and the extraction method for the nonlinear phase error caused by the double-objective lens improve the accuracy of film thickness measurements.

  17. Gamma model and its analysis for phase measuring profilometry.

    PubMed

    Liu, Kai; Wang, Yongchang; Lau, Daniel L; Hao, Qi; Hassebrook, Laurence G

    2010-03-01

    Phase measuring profilometry is a method of structured light illumination whose three-dimensional reconstructions are susceptible to error from nonunitary gamma in the associated optical devices. While the effects of this distortion diminish with an increasing number of employed phase-shifted patterns, gamma distortion may be unavoidable in real-time systems where the number of projected patterns is limited by the presence of target motion. A mathematical model is developed for predicting the effects of nonunitary gamma on phase measuring profilometry, while also introducing an accurate gamma calibration method and two strategies for minimizing gamma's effect on phase determination. These phase correction strategies include phase corrections with and without gamma calibration. With the reduction in noise, for three-step phase measuring profilometry, analysis of the root mean squared error of the corrected phase will show a 60x reduction in phase error when the proposed gamma calibration is performed versus 33x reduction without calibration.
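
    In three-step profilometry the wrapped phase follows from three patterns shifted by 2*pi/3, and a calibrated gamma can be inverted before the arctangent, which is the essence of the correction-with-calibration strategy. The sketch below is a schematic illustration under assumed values, not the authors' calibration procedure:

    ```python
    import numpy as np

    def three_step_phase(I1, I2, I3, gamma=1.0):
        """Wrapped phase from three patterns shifted by -2*pi/3, 0, +2*pi/3.
        If a calibrated gamma is supplied, the captured intensities
        (assumed normalized to [0, 1]) are linearized first."""
        if gamma != 1.0:
            I1, I2, I3 = (np.clip(I, 0.0, 1.0) ** (1.0 / gamma) for I in (I1, I2, I3))
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

    # Synthetic test: a phase ramp whose patterns pass through a gamma of 2.2
    phi = np.linspace(-np.pi, np.pi, 512, endpoint=False)
    gamma_true = 2.2
    I1, I2, I3 = ((0.5 + 0.5 * np.cos(phi + k * 2.0 * np.pi / 3.0)) ** gamma_true
                  for k in (-1.0, 0.0, 1.0))
    recovered = three_step_phase(I1, I2, I3, gamma=gamma_true)   # matches phi
    ```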

  18. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.

  19. Phase-demodulation error of a fiber-optic Fabry-Perot sensor with complex reflection coefficients.

    PubMed

    Kilpatrick, J M; MacPherson, W N; Barton, J S; Jones, J D

    2000-03-20

    The influence of reflector losses attracts little discussion in standard treatments of the Fabry-Perot interferometer yet may be an important factor contributing to errors in phase-stepped demodulation of fiber optic Fabry-Perot (FFP) sensors. We describe a general transfer function for FFP sensors with complex reflection coefficients and estimate systematic phase errors that arise when the asymmetry of the reflected fringe system is neglected, as is common in the literature. The measured asymmetric response of higher-finesse metal-dielectric FFP constructions corroborates a model that predicts systematic phase errors of 0.06 rad in three-step demodulation of a low-finesse FFP sensor (R = 0.05) with internal reflector losses of 25%.

  20. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.

    PubMed

    Gibson, Eli; Fenster, Aaron; Ward, Aaron D

    2013-10-01

    Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
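
    The three questions above map onto a textbook-style sample-size formula once registration error is treated as an extra additive variance term; the sketch below shows only that general shape and omits the within-region signal correlations that the paper's derivation accommodates (all names and numbers are ours):

    ```python
    from math import ceil
    from statistics import NormalDist

    def subjects_needed(delta, sigma_signal, sigma_reg, alpha=0.05, power=0.8):
        """Subjects needed to detect a normal-vs-pathologic signal difference
        delta when registration error inflates the per-subject contrast
        variance by sigma_reg**2 (toy model: independent additive variances)."""
        z = NormalDist()
        z_a = z.inv_cdf(1.0 - alpha / 2.0)       # two-sided test
        z_b = z.inv_cdf(power)
        var_contrast = 2.0 * (sigma_signal ** 2 + sigma_reg ** 2)
        return ceil(var_contrast * (z_a + z_b) ** 2 / delta ** 2)

    # Larger registration error -> more subjects for the same detectable difference
    print(subjects_needed(delta=0.5, sigma_signal=1.0, sigma_reg=0.0))   # 63
    print(subjects_needed(delta=0.5, sigma_signal=1.0, sigma_reg=0.8))   # 103
    ```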

  1. How unprecedented a solar minimum was it?

    PubMed

    Russell, C T; Jian, L K; Luhmann, J G

    2013-05-01

    The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.

  2. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combined error of all these types.

  3. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.

  4. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping error is a common error in InSAR processing which will seriously degrade the accuracy of the monitoring results. Based on a gross-error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. This method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress the unwrapping error when the ratio of unwrapping errors is low, and that the two methods can complement each other when the ratio of unwrapping errors is relatively high. Finally, the method is tested on real SAR data for phase unwrapping error correction. Results show that the new method can correct the phase unwrapping errors successfully in practical applications.

  5. The Lactate Minimum Test: Concept, Methodological Aspects and Insights for Future Investigations in Human and Animal Models

    PubMed Central

    Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.

    2017-01-01

    In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. Firstly, subjects must perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from myocytes (for instance) to the bloodstream (phase 2). Right after the recovery, subjects are submitted to an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and, as the intensity increases in subsequent stages, to rise again, forming a "U" shaped blood lactate kinetic. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and also the maximum oxygen consumption in phase 3; therefore, the LMT is considered a robust physiological protocol. Although encouraging reports have been published in both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) the effectiveness of the LMT for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion of the scientific evidence on the aforementioned issues, and insights for future investigations are suggested. In summary, further analyses are necessary to determine the importance of these factors, since the LMT is relevant in several contexts of health sciences. PMID:28642717
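
    The LMI is commonly estimated by fitting a second-order polynomial to the phase-3 blood lactate vs. intensity points and taking the vertex of the resulting U-shaped curve; a small sketch with made-up data (one of several fitting approaches discussed in this literature, not a prescription from the review):

    ```python
    import numpy as np

    # Made-up phase-3 data: incremental-test intensity (W) vs blood lactate (mmol/L)
    intensity = np.array([100, 125, 150, 175, 200, 225], dtype=float)
    lactate = np.array([6.8, 5.9, 5.4, 5.3, 5.8, 7.1])

    a, b, c = np.polyfit(intensity, lactate, 2)   # U-shaped quadratic fit
    lmi = -b / (2.0 * a)                          # vertex = lactate minimum intensity
    print(round(lmi, 1), "W")                     # estimated LMI
    ```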

  6. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
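
    For a known trial frequency, the quadrature structure described above reduces to a linear least-squares problem in the Cartesian components a = A cos(phi), b = -A sin(phi); the compact sketch below shows that reduction only and is not the patented adaptive algorithm, which also updates the frequency recursively:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs, n = 1000.0, 2000
    t = np.arange(n) / fs
    A, f, phi = 1.3, 50.0, 0.7
    x = A * np.cos(2 * np.pi * f * t + phi) + 0.2 * rng.standard_normal(n)

    # x ~ a*cos(wt) + b*sin(wt) with a = A*cos(phi), b = -A*sin(phi):
    # linear in (a, b) once the frequency is fixed.
    H = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    a, b = np.linalg.lstsq(H, x, rcond=None)[0]

    A_hat = np.hypot(a, b)            # amplitude estimate (~1.3)
    phi_hat = np.arctan2(-b, a)       # phase estimate (~0.7 rad)
    print(A_hat, phi_hat)
    ```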

  7. Development of minimum state requirements for local growth management policies : phase 1.

    DOT National Transportation Integrated Search

    2015-01-01

    This research entailed the development of minimum requirements for local growth management policies for use in Louisiana. The purpose of developing minimum statewide standards is to try to alleviate some of the stress placed on state and local govern...

  8. Development of minimum state requirements for local growth management policies -- phase 1.

    DOT National Transportation Integrated Search

    2015-11-01

    This research entailed the development of minimum requirements for local growth management policies for use : in Louisiana. The purpose of developing minimum statewide standards is to try to alleviate some of the stress : placed on state and local go...

  9. Statistical aspects of the TNK-S2B trial of tenecteplase versus alteplase in acute ischemic stroke: an efficient, dose-adaptive, seamless phase II/III design.

    PubMed

    Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke

    2011-08-01

    TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. (2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.

  10. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    PubMed

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error and two conformance and completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement therapy, data were missing for one patient because the CRRT device was not connected to the CIS. Automatic generation of the minimum dataset and ICU quality indicators using ICU-DaMa is feasible. The discrepancies were identified and can be corrected by improving CIS ergonomics, training healthcare professionals in the culture of the quality of information, and using tools for detecting and correcting data errors. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with the proposed metric. The augmented linear model allows more accurate calculation of the tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) than traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up to this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
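
    The sketch below illustrates the proposed metric in its simplest form: the standard deviation of a two-quadrant arctangent discriminator's output for prompt I/Q samples in AWGN, compared against a threshold of half the discriminator pull-in region. The C/N0 values, integration time, and unit signal amplitude are assumptions for illustration; the paper's augmented linear model and coherent-averaging effects are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def tracking_error_std(cn0_dbhz, t_coh=0.02, n=100_000):
        """Std of the arctangent discriminator output (the PLL's estimate
        of the true phase error, here 0) for I/Q samples in AWGN."""
        snr = 10 ** (cn0_dbhz / 10) * t_coh          # post-correlation SNR
        sigma = 1 / np.sqrt(2 * snr)                 # noise std per I/Q component
        i = 1.0 + sigma * rng.standard_normal(n)     # prompt in-phase samples
        q = 0.0 + sigma * rng.standard_normal(n)     # prompt quadrature samples
        return np.arctan(q / i).std()

    # taken here as half of the +/- pi/2 two-quadrant arctangent range
    threshold = np.pi / 4
    for cn0 in (45, 35, 25, 15):
        s = tracking_error_std(cn0)
        status = "tracking" if s < threshold else "at risk of losing lock"
        print(f"C/N0 = {cn0} dB-Hz: sigma = {s:.3f} rad ({status})")
    ```

    In this toy model the metric crosses the threshold only at very low C/N0; the paper's contribution is a linear model that predicts such crossings without resorting to direct simulation.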

  12. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  13. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  14. Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5

    NASA Astrophysics Data System (ADS)

    Ausati, Shadi; Amanollahi, Jamil

    2016-10-01

    Since Sanandaj is considered one of the most polluted cities in Iran, predicting any type of pollution, especially the concentration of suspended PM2.5 particles, which cause many diseases, could contribute to public health through timely announcements ahead of PM2.5 increases. To predict the PM2.5 concentration in the Sanandaj air, hybrid models consisting of an ensemble empirical mode decomposition and general regression neural network (EEMD-GRNN), an Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, multiple linear regression (MLR), were used. In these models the suspended PM2.5 particle data were the dependent variable, and the air quality data, including PM2.5, PM10, SO2, NO2, CO and O3, and meteorological data, including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity (RH) and daily wind speed (WS), for the year 2014 in Sanandaj were the independent variables. Among the models used, the EEMD-GRNN model, with values of R2 = 0.90, root mean square error (RMSE) = 4.9218 and mean absolute error (MAE) = 3.4644 in the training phase and values of R2 = 0.79, RMSE = 5.0324 and MAE = 3.2565 in the testing phase, performed best in predicting this phenomenon. It can be concluded that hybrid models predict PM2.5 concentration more accurately than the linear model.
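
    For reference, the three reported goodness-of-fit measures can be computed as follows; the observed and predicted values below are made-up placeholders, and R2 is taken as the coefficient of determination.

    ```python
    import numpy as np

    def r2(y, yhat):
        """Coefficient of determination."""
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    def rmse(y, yhat):
        return np.sqrt(np.mean((y - yhat) ** 2))

    def mae(y, yhat):
        return np.mean(np.abs(y - yhat))

    # Hypothetical observed vs. predicted daily PM2.5 (ug/m3)
    y    = np.array([12.0, 35.5, 48.2, 22.1, 60.3, 18.7])
    yhat = np.array([14.1, 33.0, 45.9, 25.0, 55.8, 20.2])
    print(f"R2 = {r2(y, yhat):.2f}, RMSE = {rmse(y, yhat):.2f}, "
          f"MAE = {mae(y, yhat):.2f}")
    ```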

  15. Modeling creep deformation of a two-phase TiAl/Ti3Al alloy with a lamellar microstructure

    NASA Astrophysics Data System (ADS)

    Bartholomeusz, Michael F.; Wert, John A.

    1994-10-01

    A two-phase TiAl/Ti3Al alloy with a lamellar microstructure has been previously shown to exhibit a lower minimum creep rate than the minimum creep rates of the constituent TiAl and Ti3Al single-phase alloys. Fiducial-line experiments described in the present article demonstrate that the creep rates of the constituent phases within the two-phase TiAl/Ti3Al lamellar alloy tested in compression are more than an order of magnitude lower than the creep rates of single-phase TiAl and Ti3Al alloys tested in compression at the same stress and temperature. Additionally, the fiducial-line experiments show that no interfacial sliding of the phases in the TiAl/Ti3Al lamellar alloy occurs during creep. The lower creep rate of the lamellar alloy is attributed to enhanced hardening of the constituent phases within the lamellar microstructure. A composite-strength model has been formulated to predict the creep rate of the lamellar alloy, taking into account the lower creep rates of the constituent phases within the lamellar microstructure. Application of the model yields a very good correlation between predicted and experimentally observed minimum creep rates over moderate stress and temperature ranges.

  16. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  17. Temperature-fluctuation-sensitive accumulative effect of the phase measurement errors in low-coherence interferometry in characterizing arrayed waveguide gratings.

    PubMed

    Zhao, Changyun; Wei, Bing; Yang, Longzhi; Wang, Gencheng; Wang, Yuehai; Jiang, Xiaoqing; Li, Yubo; Yang, Jianyi

    2015-09-20

    We investigate the accumulative effect of the phase measurement errors in characterizing optical multipath components by low-coherence interferometry. The accumulative effect is caused by fluctuations of the environment temperature, which lead to variation of the refractive index of the device under test. The resulting phase measurement errors accumulate as the phase difference between the two interferometer arms increases. Our experiments demonstrate that the accumulative effect is still obvious even though the thermo-optical coefficient of the device under test is quite small. Shortening the measurement time to reduce the fluctuation of the environment temperature can effectively restrain the accumulative effect. The experiments show that when the scanning speed increases to 4.8 mm/s, the slope of the phase measurement errors decreases to 5.52×10⁻⁸, which means the accumulative effect can be ignored.

  18. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.

  19. The Influence of Training Phase on Error of Measurement in Jump Performance.

    PubMed

    Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B

    2016-03-01

    The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.
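
    As a sketch of the quantity underlying this analysis, the snippet below computes the within-subject coefficient of variation of jump mean power separately for two training phases, and the factor change between them. The data are synthetic placeholders, and the study's mixed-model meta-analysis is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical daily mean-power values (W) for one subject in two phases
    baseline = 2500 + 60 * rng.standard_normal(28)   # 4 wk of baseline training
    overload = 2450 + 80 * rng.standard_normal(28)   # 4 wk of overload training

    def cv_percent(x):
        """Within-phase coefficient of variation, as a percentage."""
        return 100 * x.std(ddof=1) / x.mean()

    cv_base, cv_over = cv_percent(baseline), cv_percent(overload)
    print(f"CV baseline = {cv_base:.1f}%, CV overload = {cv_over:.1f}%")
    print(f"factor change = {cv_over / cv_base:.2f}")  # cf. the x/÷ 1.3 reported
    ```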

  20. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
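
    The sketch below walks through the error-pooling step for a single 8×8 block: quantize the DCT coefficients, scale the quantization errors by per-frequency visual thresholds, and pool nonlinearly. The flat quantization matrix, the flat thresholds, and the Minkowski exponent β = 4 are placeholders, not the model's actual values.

    ```python
    import numpy as np
    from scipy.fft import dctn

    rng = np.random.default_rng(2)

    block = rng.integers(0, 256, (8, 8)).astype(float)  # stand-in image block
    Q = np.full((8, 8), 16.0)        # placeholder quantization matrix
    T = np.full((8, 8), 4.0)         # placeholder per-frequency visual thresholds

    C = dctn(block, norm="ortho")    # 8x8 DCT coefficients
    err = Q * np.round(C / Q) - C    # DCT-domain quantization error
    perceptual = err / T             # error in units of visual threshold

    beta = 4.0                       # Minkowski pooling exponent (illustrative)
    total = (np.abs(perceptual) ** beta).sum() ** (1 / beta)
    print(f"pooled perceptual error = {total:.3f}")
    ```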

  1. 14 CFR 29.1323 - Airspeed indicating system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... minimum practicable instrument calibration error when the corresponding pitot and static pressures are... pitot tube or an equivalent means of preventing malfunction due to icing. [Doc. No. 5084, 29 FR 16150...

  2. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

    The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.

  3. A further contribution to the seasonal variation of weighted mean temperature

    NASA Astrophysics Data System (ADS)

    Ding, Maohua; Hu, Wusheng

    2017-12-01

    The weighted mean temperature Tm is a variable parameter in Global Navigation Satellite System (GNSS) meteorology and the Askne-Nordius zenith wet delay (ZWD) model. Some parameters of the Tm seasonal variation (e.g. the annual mean value, the annual range, the annual and semi-annual amplitudes, and the long-term trend) have been discussed before. In this study, some additional results about the Tm seasonal variation on a global scale were found by using the Tm time series at 309 global radiosonde sites. Periodic signals of the annual and semi-annual variations were detected in these Tm time series by using the Lomb-Scargle periodogram. The annual variation is the main component of the periodic Tm in non-tropical regions, while either the annual or the semi-annual variation can be the main component of the periodic Tm in the tropics. The mean annual Tm remains almost constant with increasing latitude in the tropics, while it decreases with increasing latitude in non-tropical regions. From a global perspective, Tm has an increasing trend of 0.22 K/decade on average, which may be caused by global warming effects. The annual phase falls around January in the non-tropical regions of the Southern Hemisphere and around July in the non-tropical regions of the Northern Hemisphere, but it shows no clear symmetry in the tropics. Unlike the annual phase, the geographical distribution of the semi-annual phase does not follow obvious rules. In non-tropical regions, the maximum and minimum Tm of the seasonal model are usually found on summer and winter days, respectively, while for tropical regions the maximum and minimum Tm are distributed over the whole year rather than in any fixed season. The seasonal model errors increase with increasing annual amplitude. A primary reason for the irregular seasonal variation in the tropics is that Tm has rather small variations in this region.
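
    As a sketch of the period-detection step, the code below applies the Lomb-Scargle periodogram to a synthetic Tm series containing annual and semi-annual components; the amplitudes, noise level, and uneven sampling are invented for illustration.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(8)

    # Synthetic weighted mean temperature series (K), unevenly sampled in time
    t = np.sort(rng.uniform(0, 10 * 365.25, 2000))      # days over ~10 years
    tm = (280 + 8 * np.cos(2 * np.pi * t / 365.25)      # annual component
              + 2 * np.cos(4 * np.pi * t / 365.25)      # semi-annual component
              + rng.standard_normal(t.size))            # noise

    periods = np.linspace(60, 500, 4000)                # trial periods, days
    power = lombscargle(t, tm - tm.mean(), 2 * np.pi / periods)

    i1 = np.argmax(power)                               # strongest peak
    masked = np.where(np.abs(periods - periods[i1]) > 40, power, 0)
    i2 = np.argmax(masked)                              # next well-separated peak
    print(f"detected periods: ~{periods[i1]:.0f} d and ~{periods[i2]:.0f} d")
    ```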

  4. Measurement and interpretation of crustal deformation rates associated with postglacial rebound

    NASA Technical Reports Server (NTRS)

    Davis, James L.

    1994-01-01

    Analysis of Global Positioning System (GPS) data from two sites separated by a horizontal distance of only approximately 2.2 m yielded phase residuals exhibiting a systematic elevation angle dependence. One of the two GPS antennas was mounted on an approximately 1 m high concrete pillar, and the other was mounted on a standard wooden tripod. We performed elevation angle cutoff tests with these data, and established that the vertical coordinate of site position was sensitive to the minimum elevation angle (elevation cutoff) of the data analyzed. For example, the vertical coordinate of site position changed by 9.7 ± 0.8 mm when the minimum elevation angle was increased from 10° to 25°. We performed simulations based on a simple (ray tracing) multipath model with a single horizontal reflector, and demonstrated that the elevation angle cutoff test results and the pattern of the residuals versus elevation angle could be qualitatively reproduced if the reflector were located 0.1-0.2 m beneath the antenna phase center. We therefore hypothesized that the source of the elevation-angle-dependent error was multipath reflection and scattering, and that the horizontal surface of the pillar, located approximately 0.2 m beneath the antenna phase center, was the primary reflector. We tested this hypothesis by placing microwave-absorbing material between the antenna and the pillar in a number of configurations and analyzed the changes in the apparent position of the antenna. The results indicate that (1) the horizontal surface of the pillar is indeed the main reflector, (2) both the concrete and the metal plate embedded in the pillar are significant reflectors, and (3) the reflection can be reduced to a great degree by the use of microwave-absorbing materials. These results have significant implications for the accuracy of global GPS geodetic tracking networks which use a pillar-antenna configuration identical or similar to the one used here (at the Westford WFRD GPS site).

  5. Test and evaluation of the Navy half-watt RTG [Radioisotope Thermoelectric Generator]

    NASA Technical Reports Server (NTRS)

    Rosell, F. E., Jr.; Lane, S. D.; Eggers, P. E.; Gawthrop, W. E.; Rouklove, P. G.; Truscello, V. C.

    1976-01-01

    The radioisotope thermoelectric generator (RTG) considered is to provide a continuous minimum power output of 0.5 watt at 6.0 to 8.5 volts for a minimum period of 15 years. The mechanical-electrical evaluation phase discussed involved the conduction of shock and vibration tests. The thermochemical-physical evaluation phase consisted of an analysis of the materials and the development of a thermal model. The thermoelectric evaluation phase included the accelerated testing of the thermoelectric modules.

  6. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
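
    The sketch below illustrates the paper's central caution by simulation: with a deliberately imbalanced covariate, the empirical standard error of the ANCOVA-adjusted mean difference can exceed that of the unadjusted difference. The sample size, covariate offset, and correlation are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def empirical_ses(n=20, dx=1.5, rho=0.5, reps=5000):
        """Empirical SEs of the unadjusted and ANCOVA-adjusted mean
        differences under a null treatment effect, with the group covariate
        means offset by dx."""
        unadj, adj = np.empty(reps), np.empty(reps)
        for r in range(reps):
            x0 = rng.standard_normal(n)            # covariate, control
            x1 = rng.standard_normal(n) + dx       # covariate, treatment
            y0 = rho * x0 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
            y1 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
            unadj[r] = y1.mean() - y0.mean()
            xc = np.concatenate([x0 - x0.mean(), x1 - x1.mean()])
            yc = np.concatenate([y0 - y0.mean(), y1 - y1.mean()])
            b = xc @ yc / (xc @ xc)                # pooled within-group slope
            adj[r] = unadj[r] - b * (x1.mean() - x0.mean())
        return unadj.std(), adj.std()

    se_u, se_a = empirical_ses()
    print(f"SE unadjusted = {se_u:.3f}, SE adjusted = {se_a:.3f}")
    ```

    With these settings the adjusted difference shows the larger empirical standard error, illustrating that covariate adjustment is not guaranteed to help when the covariate means are imbalanced.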

  7. Using heuristic evaluations to assess the safety of health information systems.

    PubMed

    Carvalho, Christopher J; Borycki, Elizabeth M; Kushniruk, Andre W

    2009-01-01

    Health information systems (HISs) are typically seen as a mechanism for reducing medical errors. There is, however, evidence that technology may actually cause errors. As a result, it is crucial to fully test any system prior to its implementation. At present, evidence-based evaluation heuristics do not exist for assessing aspects of interface design that lead to medical errors. A three-phase study was conducted to develop evidence-based heuristics for evaluating interfaces. Phase 1 consisted of a systematic review of the literature. In Phase 2, a comprehensive list of 33 evaluation heuristics that could be used to test for potential technology-induced errors was developed based on the review. Phase 3 involved applying these healthcare-specific heuristics to evaluate a HIS.

  8. Computer program to minimize prediction error in models from experiments with 16 hypercube points and 0 to 6 center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1982-01-01

    A previous report described a backward deletion procedure of model selection that was optimized for minimum prediction error and used a multiparameter combination of the F-distribution and an order-statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.

  9. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are define, measure, analyze, improve, and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and control of the units with high error rates was put in place. Fifty-six (52.4%) of the 107 recorded errors occurred in the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates mainly in the pre-analytic and analytic phases.
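
    A trivial sketch of the rate computation behind these figures; the event volume is a made-up placeholder, and the small difference from the reported 79.77% reduction comes from the rounding of the per-million rates quoted in the abstract.

    ```python
    def per_million(n_errors, n_events):
        """Error rate expressed per million events (cf. the 3.4 DPMO target)."""
        return 1e6 * n_errors / n_events

    first_half = per_million(68, 10_000_000)    # hypothetical volumes
    second_half = per_million(13, 10_000_000)
    drop = 100 * (1 - second_half / first_half)
    print(f"{first_half:.1f} -> {second_half:.1f} errors/million "
          f"({drop:.1f}% reduction)")
    ```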

  10. Predictive momentum management for a space station measurement and computation requirements

    NASA Technical Reports Server (NTRS)

    Adams, John Carl

    1986-01-01

    An analysis is made of the effects of errors and uncertainties in the predicting of disturbance torques on the peak momentum buildup on a space station. Models of the disturbance torques acting on a space station in low Earth orbit are presented, to estimate how accurately they can be predicted. An analysis of the torque and momentum buildup about the pitch axis of the Dual Keel space station configuration is formulated, and a derivation of the Average Torque Equilibrium Attitude (ATEA) is presented, for the case of no MRMS (Mobile Remote Manipulation System) motion, Y vehicle axis MRMS motion, and Z vehicle axis MRMS motion. Results showed the peak momentum buildup to be approximately 20000 N-m-s and to be relatively insensitive to errors in the predicting torque models, for Z axis motion of the MRMS was found to vary significantly with model errors, but not exceed a value of approximately 15000 N-m-s for the Y axis MRMS motion with 1 deg attitude hold error. Minimum peak disturbance momentum was found not to occur at the ATEA angle, but at a slightly smaller angle. However, this minimum peak momentum attitude was found to produce significant disturbance momentum at the end of the predicting time interval.

  11. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    To address the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that their minimum distance is often not large enough, which degrades error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes contain no cycles of length 4, thus they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
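
    A minimal sketch of the building blocks named above: weight-1 circulant permutation matrices (CPMs) and weight-2 circulants (W2CMs) assembled into a block parity-check matrix. The circulant size and shift values here are arbitrary; in the paper they are derived from a perfect cyclic difference set.

    ```python
    import numpy as np

    def cpm(p, shift):
        """p x p circulant permutation matrix: identity cyclically shifted."""
        return np.roll(np.eye(p, dtype=int), shift, axis=1)

    def w2cm(p, s1, s2):
        """Weight-2 circulant: GF(2) sum of two distinct CPMs."""
        return (cpm(p, s1) + cpm(p, s2)) % 2

    p = 7                                  # circulant size (illustrative)
    Z = np.zeros((p, p), dtype=int)
    H = np.block([[cpm(p, 1), cpm(p, 3), Z],
                  [w2cm(p, 0, 2), Z, cpm(p, 5)]])
    print("H shape:", H.shape)             # (14, 21)
    print("column weights:", H.sum(axis=0))
    ```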

  12. Cost-effectiveness of the stream-gaging program in North Carolina

    USGS Publications Warehouse

    Mason, R.R.; Jackson, N.M.

    1985-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.

  13. Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont

    USGS Publications Warehouse

    Smath, J.A.; Blackey, F.E.

    1986-01-01

    Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effective analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)

  14. Optimized graph-based mosaicking for virtual microscopy

    NASA Astrophysics Data System (ADS)

    Steckhan, Dirk G.; Wittenberg, Thomas

    2009-02-01

    Virtual microscopy has the potential to partially replace traditional microscopy. For virtualization, the slide is scanned once by a fully automated robotic microscope and saved digitally. Typically, such a scan results in several hundreds to thousands of fields of view. Since robotic stages have positioning errors, these fields of view have to be registered locally and globally in an additional step. In this work we propose a new global mosaicking method for the creation of virtual slides, based on sub-pixel-accurate phase correlation for local alignment in combination with Prim's minimum spanning tree algorithm for global alignment. Our algorithm allows for a robust reproduction of the original slide even in the presence of views with little to no information content. This makes it especially suitable for the mosaicking of cervical smears. These smears often exhibit large empty areas, which do not contain enough information for common stitching approaches.
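
    A compact sketch of the local-alignment step: FFT-based phase correlation recovering the integer translation between two overlapping views. Sub-pixel refinement and the minimum-spanning-tree global alignment are omitted, and the image and shift are synthetic.

    ```python
    import numpy as np

    def phase_correlation(a, b):
        """Estimate the integer translation between two equal-size views."""
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        F /= np.abs(F) + 1e-12                 # normalized cross-power spectrum
        corr = np.fft.ifft2(F).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # map peaks beyond the half-size to negative shifts
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return dy, dx

    rng = np.random.default_rng(4)
    img = rng.random((128, 128))
    shifted = np.roll(img, (5, -9), axis=(0, 1))
    print(phase_correlation(shifted, img))     # expect (5, -9)
    ```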

  15. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by relying on the UESA (Unimodal Error Surface Assumption), under which the matching error increases monotonically as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) have made use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. For large motions, however, these BMAs are easily trapped in local minima and yield poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motions, at half the computational load.
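
    For concreteness, here is a bare-bones diamond search over a MAE cost surface, using a smooth synthetic image so that the unimodal-surface assumption roughly holds. The block size, search patterns, and start point are illustrative, and the paper's adaptive relocation of the search origin is not implemented.

    ```python
    import numpy as np

    def mae(block, ref, y, x):
        """Mean absolute error between a block and the reference at (y, x)."""
        h, w = block.shape
        return np.abs(ref[y:y + h, x:x + w] - block).mean()

    def diamond_search(block, ref, y, x):
        """Walk the large diamond pattern until its center wins, then refine
        once with the small diamond pattern."""
        ldsp = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
                (1, 1), (1, -1), (-1, 1), (-1, -1)]
        sdsp = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
        ymax = ref.shape[0] - block.shape[0]
        xmax = ref.shape[1] - block.shape[1]
        while True:
            cands = [(y + dy, x + dx) for dy, dx in ldsp
                     if 0 <= y + dy <= ymax and 0 <= x + dx <= xmax]
            best = min(cands, key=lambda p: mae(block, ref, *p))
            if best == (y, x):
                break          # center is the minimum: switch to small pattern
            y, x = best
        cands = [(y + dy, x + dx) for dy, dx in sdsp
                 if 0 <= y + dy <= ymax and 0 <= x + dx <= xmax]
        return min(cands, key=lambda p: mae(block, ref, *p))

    yy, xx = np.mgrid[0:64, 0:64]
    ref = np.sin(yy / 9.0) + np.cos(xx / 7.0)  # smooth frame, UESA roughly holds
    blk = ref[20:36, 30:46]                    # 16x16 block actually at (20, 30)
    print(diamond_search(blk, ref, 16, 16))    # expected: (20, 30)
    ```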

  16. Force-Time Entropy of Isometric Impulse.

    PubMed

    Hsieh, Tsung-Yu; Newell, Karl M

    2016-01-01

    The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single-finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that peak force variability increased either with increasing force level or with shorter time to peak force, which also reduced timing error variability. The peak force entropy and the entropy of time to peak force increased on the respective dimension as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework, with the joint force-time entropy at a minimum in the middle of the parameter range of the discrete impulse.

  17. Geological Carbon Sequestration: A New Approach for Near-Surface Assurance Monitoring

    PubMed Central

    Wielopolski, Lucian

    2011-01-01

    There are two distinct objectives in monitoring geological carbon sequestration (GCS): deep monitoring of the reservoir’s integrity and plume movement, and near-surface monitoring (NSM) to ensure public health and the safety of the environment. However, the minimum detection limits of the current instrumentation for NSM are too high for detecting weak signals that are embedded in the background levels of the natural variations, and the data obtained represent point measurements in space and time. A new approach for NSM, based on gamma-ray spectroscopy induced by inelastic neutron scattering (INS), offers novel and unique characteristics, providing the following: (1) high sensitivity, with a reducible error of measurement and detection limits, and (2) temporal and spatial integration of the carbon in soil that results from underground CO2 seepage. Preliminary field results validated this approach, showing carbon suppression of 14% in the first year and 7% in the second year. In addition, the temporal behavior of the error propagation is presented, and it is shown that for a signal at the minimum detection level the error asymptotically approaches 47%. PMID:21556180

  18. 29 CFR Appendix C to Part 510 - Government Corporations Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Government Corporations Eligible for Minimum Wage Phase-In C Appendix C to Part 510 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION... FAIR LABOR STANDARDS ACT IN PUERTO RICO Pt. 510, App. C Appendix C to Part 510—Government Corporations...

  19. 29 CFR Appendix C to Part 510 - Government Corporations Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Government Corporations Eligible for Minimum Wage Phase-In C Appendix C to Part 510 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION... FAIR LABOR STANDARDS ACT IN PUERTO RICO Pt. 510, App. C Appendix C to Part 510—Government Corporations...

  20. 29 CFR Appendix D to Part 510 - Municipalities Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... other employees are subject to Tier 3. Municipalities which did not submit data are subject to Tier 1... June 1, 1990. If upon review it is determined that the municipality should have been subject to Tier 1... Minimum Wage Phase-In This appendix contains a listing of the municipalities in Puerto Rico and the tier...

  1. Interferometric phase measurement techniques for coherent beam combining

    NASA Astrophysics Data System (ADS)

    Antier, Marie; Bourderionnet, Jérôme; Larat, Christian; Lallier, Eric; Primot, Jérôme; Brignon, Arnaud

    2015-03-01

    Coherent beam combining of fiber amplifiers provides an attractive means of reaching high laser power. In an interferometric phase measurement, the beams issued from each combined fiber are imaged onto a sensor and interfere with a reference plane wave. This registration of interference patterns on a camera allows the measurement of the exact phase error of each fiber beam in a single shot. This method is therefore a promising candidate for combining very large numbers of fibers. Based on this technique, several architectures can be proposed to coherently combine a high number of fibers. The first, based on digital holography, transfers the camera image directly to a spatial light modulator (SLM). The generated hologram is used to compensate the phase errors induced by the amplifiers. This architecture therefore performs the phase measurement and correction collectively. Unlike previous digital holography techniques, the probe beams measuring the phase errors between the fibers co-propagate with the phase-locked signal beams. This architecture is compatible with the use of multi-stage isolated amplifying fibers. In that case, only 20 pixels per fiber on the SLM are needed to obtain a residual phase-shift error below λ/10 rms. The second proposed architecture calculates the correction applied to each fiber channel by tracking the relative position of the interference fringes. In this case, a phase modulator is placed on each channel. In that configuration, only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept.

  2. An active co-phasing imaging testbed with segmented mirrors

    NASA Astrophysics Data System (ADS)

    Zhao, Weirui; Cao, Genrui

    2011-06-01

    An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others were each adjustable relative to the fixed segment in three degrees of freedom (piston, tip and tilt) using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. The method of two-dimensional dispersed fringe analysis was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm. The tip-tilt error was obtained with the method of centroid sensing. Co-phased imaging could be realized by correcting the errors measured above with the sensitive micro-displacement actuators driven by a computer. The process of co-phasing error sensing and correction could be monitored in real time by a monitoring module built into the testbed. A FISBA interferometer was introduced to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm rms was achieved.

  3. Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0

    NASA Astrophysics Data System (ADS)

    Millán-Otoya, Jorge E.; Boettcher, Stefan

    2014-03-01

    Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible.[1] We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO).[2] EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems.[3]

  4. Analysis of nonuniformity in intron phase distribution.

    PubMed Central

    Fedorov, A; Suboch, G; Bujakov, M; Fedorova, L

    1992-01-01

    The distribution of different intron groups with respect to phases has been analyzed. It has been established that group II introns and nuclear introns have a minimum frequency of phase 2 introns. Since the phase of introns is an extremely conservative measure, the observed minimum reflects evolutionary processes. The sample of all known group I introns was too small to provide a valid characterization of their phase distribution. The findings observed for the unequal distribution of phases cannot be explained solely on the basis of the mobile properties of introns. One of the most likely explanations for this nonuniformity in the intron phase distribution is the process of exon shuffling. It is proposed that group II introns originated at the early stages of evolution and were involved in the process of exon shuffling. PMID:1598214

  5. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
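
    A small sketch of the classical checksum idea underlying these codes: encode A with a column-checksum row and B with a row-checksum column, then verify the checksums of the product to detect a fault. The all-ones coding vector used here is the simplest member of the more general set of linear codes the paper studies.

    ```python
    import numpy as np

    def column_encode(A):
        """Append a column-checksum row (coding vector e = all ones)."""
        return np.vstack([A, A.sum(axis=0)])

    def row_encode(B):
        """Append a row-checksum column."""
        return np.hstack([B, B.sum(axis=1, keepdims=True)])

    A = np.array([[1., 2.], [3., 4.]])
    B = np.array([[5., 6.], [7., 8.]])
    C_full = column_encode(A) @ row_encode(B)   # fully encoded product of AB

    # Fault check: the checksum row/column of the product must equal the
    # sums of its data part (up to roundoff); a mismatch flags a fault.
    data = C_full[:-1, :-1]
    ok = (np.allclose(C_full[-1, :-1], data.sum(axis=0)) and
          np.allclose(C_full[:-1, -1], data.sum(axis=1)))
    print("checksums consistent:", ok)
    ```

    In exact arithmetic these checks hold with equality; the paper's concern is choosing coding vectors so that floating-point roundoff neither masks nor mimics a physical fault.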

  6. Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.

    PubMed

    Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro

    2018-05-28

    In this work, we propose a novel visible light communication (VLC) scheme utilizing multiple red-green-blue (RGB) LED triplets, each with a different emission spectrum for red, green and blue, to mitigate the inter-color interference that arises under spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed so as to demonstrate how metameric properties have been considered. At the receiver, multiple photodiodes, each with a color filter tuned to one transmitting light emitting diode (LED), are employed. Three different detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
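
    A toy sketch of two linear detectors for a crosstalk-limited RGB channel: zero forcing and linear MMSE. The 3×3 channel matrix, noise level, and OOK mapping are invented for illustration, and the MMSE filter below uses a unit-symbol-power assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    n = 3                                        # R, G and B channels
    H = np.eye(n) + 0.3 * rng.random((n, n))     # hypothetical crosstalk channel
    x = rng.integers(0, 2, n).astype(float)      # OOK symbols (0 = off, 1 = on)
    sigma2 = 0.05                                # noise variance
    y = H @ x + np.sqrt(sigma2) * rng.standard_normal(n)

    # Zero forcing: invert the channel, ignoring noise enhancement
    x_zf = np.linalg.solve(H, y)

    # Linear MMSE: balance interference suppression against noise enhancement
    W = np.linalg.solve(H.T @ H + sigma2 * np.eye(n), H.T)
    x_mmse = W @ y

    print("ZF  :", (x_zf   > 0.5).astype(int))
    print("MMSE:", (x_mmse > 0.5).astype(int))
    print("true:", x.astype(int))
    ```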

  7. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions, the least-squares solution, and potentially three other low residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.

  8. Dual tasking negatively impacts obstacle avoidance abilities in post-stroke individuals with visuospatial neglect: Task complexity matters!

    PubMed

    Aravind, Gayatri; Lamontagne, Anouk

    2017-01-01

    Persons with perceptual-attentional deficits due to visuospatial neglect (VSN) after a stroke are at risk of collisions while walking in the presence of moving obstacles. The attentional burden of performing a dual task may further compromise their obstacle avoidance performance, putting them at a greater risk of collisions. The objective of this study was to compare the ability of persons with (VSN+) and without VSN (VSN-) to dual task while negotiating moving obstacles. Twenty-six stroke survivors (13 VSN+, 13 VSN-) were assessed on their ability to (a) negotiate moving obstacles while walking (locomotor single task); (b) perform a pitch-discrimination task (cognitive single task); and (c) simultaneously perform the walking and cognitive tasks (dual task). We compared the groups on locomotor (collision rates, minimum distance from obstacle and onset of strategies) and cognitive (error rates) outcomes. For both single- and dual-task walking, VSN+ individuals showed higher collision rates than VSN- individuals. Dual tasking caused deterioration of locomotor (more collisions, delayed onset and smaller minimum distances) and cognitive performance (higher error rate) in VSN+ individuals. In contrast, VSN- individuals maintained collision rates and increased minimum distance, but showed more cognitive errors, prioritizing their locomotor performance. Individuals with VSN demonstrate cognitive-locomotor interference under dual-task conditions, which could severely compromise safety when ambulating in community environments and may explain the poor recovery of independent community ambulation in these individuals.

  9. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. But in practice a slight tilt of the reference wave often occurs, and this tilt will introduce unexpected errors into the reconstructed object wavefront. Usually the iterative least-squares method, which is time-consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. In this method, only simple mathematical operations are used, avoiding the least-squares equations required by most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by 2 orders of magnitude.
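
    For context, the sketch below shows the standard four-step phase-shifting reconstruction that this kind of error analysis builds on; with an ideal untilted plane-wave reference, the recovered phase is exact up to wrapping. The synthetic wavefront and modulation values are arbitrary.

    ```python
    import numpy as np

    # Synthetic object phase and four pi/2-stepped interferograms
    y, x = np.mgrid[0:64, 0:64] / 64.0
    phi = 2 * np.pi * (x**2 + 0.5 * y)            # hypothetical wavefront
    a, b = 1.0, 0.8                               # background and modulation
    I1, I2, I3, I4 = (a + b * np.cos(phi + k * np.pi / 2) for k in range(4))

    # Standard 4-step reconstruction (assumes an untilted reference wave)
    phi_rec = np.arctan2(I4 - I2, I1 - I3)
    err = np.angle(np.exp(1j * (phi_rec - phi)))  # wrapped reconstruction error
    print(f"max |error| = {np.abs(err).max():.2e} rad")
    ```

    A tilted reference would add a linear phase ramp to the frames and bias this reconstruction; detecting and removing that bias without iterative least squares is precisely what the proposed algorithm addresses.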

  10. Impact of Feedback on Three Phases of Performance Monitoring

    PubMed Central

    Appelgren, Alva; Penny, William; Bengtsson, Sara L

    2013-01-01

    We investigated if certain phases of performance monitoring show differential sensitivity to external feedback and thus rely on distinct mechanisms. The phases of interest were: the error phase (FE), the phase of the correct response after errors (FEC), and the phase of correct responses following corrects (FCC). We tested accuracy and reaction time (RT) on 12 conditions of a continuous-choice-response task: the 2-back task. External feedback was either presented or not in FE and FEC, and delivered on 0%, 20%, or 100% of FCC trials. The FCC20 condition was matched to FE and FEC in the number of sounds received so that we could investigate when external feedback was most valuable to the participants. We found that external feedback led to a reduction in accuracy when presented on all the correct responses. Moreover, RT was significantly reduced for FCC100, which in turn correlated with the accuracy reduction. Interestingly, the correct response after an error was particularly sensitive to external feedback, since accuracy was reduced when external feedback was presented during this phase but not for FCC20. Notably, error-monitoring was not influenced by feedback type. The results are in line with models suggesting that the internal error-monitoring system is sufficient in cognitively demanding tasks where performance is ∼80%, as well as theories stipulating that external feedback directs attention away from the task. Our data highlight the first correct response after an error as particularly sensitive to external feedback, suggesting that important consolidation of response strategy takes place here. PMID:24217138

  11. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water yr. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations including the crest-stage and stage-only stations would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)

  12. How to conduct External Quality Assessment Schemes for the pre-analytical phase?

    PubMed

    Kristensen, Gunn B B; Aakre, Kristin Moberg; Kristoffersen, Ann Helen; Sandberg, Sverre

    2014-01-01

    In laboratory medicine, several studies have described the most frequent errors in the different phases of the total testing process, and a large proportion of these errors occur in the pre-analytical phase. Schemes for the registration of errors and subsequent feedback to the participants have been conducted for decades concerning the analytical phase by External Quality Assessment (EQA) organizations operating in most countries. The aim of this paper is to present an overview of different types of EQA schemes for the pre-analytical phase and to give examples of some existing schemes. So far, very few EQA organizations have focused on the pre-analytical phase, and most EQA organizations do not offer pre-analytical EQA schemes (EQAS). Pre-analytical EQAS are more difficult to perform and standardize, and accreditation bodies do not ask laboratories for results from such schemes. However, some ongoing EQA programs for the pre-analytical phase do exist, and some examples are given in this paper. The methods used can be divided into three different types: collecting information about pre-analytical laboratory procedures, circulating real samples to collect information about interferences that might affect the measurement procedure, or registering actual laboratory errors and relating these to quality indicators. These three types have different focuses and different challenges regarding implementation, and a combination of the three is probably necessary to detect and monitor the wide range of errors occurring in the pre-analytical phase.

  13. Dynamic modeling and ascent flight control of Ares-I Crew Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Du, Wei

    This research focuses on dynamic modeling and ascent flight control of large flexible launch vehicles such as the Ares-I Crew Launch Vehicle (CLV). A complete set of six-degrees-of-freedom dynamic models of the Ares-I, incorporating its propulsion, aerodynamics, guidance and control, and structural flexibility, is developed. NASA's Ares-I reference model and the SAVANT Simulink-based program are utilized to develop a Matlab-based simulation and linearization tool for an independent validation of the performance and stability of the ascent flight control system of large flexible launch vehicles. A linearized state-space model as well as a non-minimum-phase transfer function model (which is typical for flexible vehicles with non-collocated actuators and sensors) are validated for ascent flight control design and analysis. This research also investigates fundamental principles of flight control analysis and design for launch vehicles, in particular the classical "drift-minimum" and "load-minimum" control principles. It is shown that an additional feedback of angle-of-attack can significantly improve overall performance and stability, especially in the presence of unexpected large wind disturbances. For a typical "non-collocated actuator and sensor" control problem for large flexible launch vehicles, non-minimum-phase filtering of "unstably interacting" bending modes is also shown to be effective. The uncertainty model of a flexible launch vehicle is derived. The robust stability of an ascent flight control system design, which directly controls the inertial attitude-error quaternion and also employs the non-minimum-phase filters, is verified by the framework of structured singular value (mu) analysis. Furthermore, nonlinear coupled dynamic simulation results are presented for a reference model of the Ares-I CLV as another validation of the feasibility of the ascent flight control system design. Another important issue for a single-main-engine launch vehicle is stability under malfunction of the roll control system. The roll motion of the Ares-I Crew Launch Vehicle under nominal flight conditions is actively stabilized by its roll control system employing thrusters. This dissertation describes the ascent flight control design problem of the Ares-I in the event of disabled or failed roll control. A simple pitch/yaw control logic is developed for such a technically challenging problem by exploiting the inherent versatility of a quaternion-based attitude control system. The proposed scheme requires only the desired inertial attitude quaternion to be re-computed using the actual uncontrolled roll angle information to achieve an ascent flight trajectory identical to the nominal flight case with active roll control. Another approach, which utilizes a simple adjustment of the proportional-derivative gains of the quaternion-based flight control system without active roll control, is also presented. This approach does not require the re-computation of the desired inertial attitude quaternion. A linear stability criterion is developed for proper adjustment of attitude and rate gains. The linear stability analysis results are validated by nonlinear simulations of the ascent flight phase. However, the first approach, requiring a simple modification of the desired attitude quaternion, is recommended for the Ares-I as well as other launch vehicles in the event of no active roll control. Finally, the method derived to stabilize a large flexible launch vehicle in the event of uncontrolled roll drift is generalized as a modified attitude quaternion feedback law. It is used to stabilize an axisymmetric rigid body by two independent control torques.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, E.

    Application of simple adaptive control (SAC) theory to the design of guidance and control systems for winged re-entry vehicles has been proven successful. To apply SAC to these non-linear and non-stationary systems, it needs to be Almost Strictly Passive (ASP), which is an extension of the Almost Strictly Positive Real (ASPR) condition for linear, time-invariant systems. To fulfill the ASP condition, the controlled, non-linear system has to be minimum-phase (i.e., the zero dynamics is stable), and there is a specific condition for the product of output and input matrix. Earlier studies indicate that even the linearised system is not ASPR. The two problems at hand are: 1) the system is non-minimum phase when flying with zero bank angle, and 2) whenever there is hybrid control, e.g., yaw control is established by combined reaction and aerodynamic control for the major part of flight, the second ASPR condition cannot be met. In this paper we look at both issues, the former related to the guidance system and the latter to the attitude-control system. It is concluded that whenever the nominal bank angle is zero, the passivity conditions can never be met, and guidance should be based on nominal commands and a redefinition of those whenever the error becomes too large. For the remaining part of the trajectory, the passivity conditions are marginally met, but it is proposed to add feedforward compensators to alleviate these conditions. The issue of hybrid control is avoided by redefining the controls with total control moments and adding a so-called control allocator. Deriving the passivity conditions for rotational motion, and evaluating these conditions along the trajectory shows that the (non-linear) winged entry vehicle is ASP. The sufficient conditions to apply SAC for attitude control are thus met.

  15. Effects of Random Circuit Fabrication Errors on Small Signal Gain and on Output Phase In a Traveling Wave Tube

    NASA Astrophysics Data System (ADS)

    Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.

    2011-10-01

    Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications, as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work was supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.

  16. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimal.

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  18. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.

  19. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  20. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition, and time series analysis of the model biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance, and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact

  1. Minimum energy control for in vitro neurons.

    PubMed

    Nabi, Ali; Stigen, Tyler; Moehlis, Jeff; Netoff, Theoden

    2013-06-01

    To demonstrate the applicability of optimal control theory for designing minimum energy charge-balanced input waveforms for single periodically-firing in vitro neurons from brain slices of Long-Evans rats. The method of control uses the phase model of a neuron and does not require prior knowledge of the neuron's biological details. The phase model of a neuron is a one-dimensional model that is characterized by the neuron's phase response curve (PRC), a sensitivity measure of the neuron to a stimulus applied at different points in its firing cycle. The PRC for each neuron is experimentally obtained by measuring the shift in phase due to a short-duration pulse injected into the periodically-firing neuron at various phase values. Based on the measured PRC, continuous-time, charge-balanced, minimum energy control waveforms have been designed to regulate the next firing time of the neuron upon application at the onset of an action potential. The designed waveforms can achieve the inter-spike-interval regulation for in vitro neurons with energy levels that are lower than those of conventional monophasic pulsatile inputs of past studies by at least an order of magnitude. They also provide the advantage of being charge-balanced. The energy efficiency of these waveforms is also shown by performing several supporting simulations that compare the performance of the designed waveforms against that of phase shuffled surrogate inputs, variants of the minimum energy waveforms obtained from suboptimal PRCs, as well as pulsatile stimuli that are applied at the point of maximum PRC. It was found that the minimum energy waveforms perform better than all other stimuli both in terms of control and in the amount of energy used. Specifically, it was seen that these charge-balanced waveforms use at least an order of magnitude less energy than conventional monophasic pulsatile stimuli. The significance of this work is that it uses concepts from the theory of optimal control and introduces a novel approach in designing minimum energy charge-balanced input waveforms for neurons that are robust to noise and implementable in electrophysiological experiments.
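
    The phase-model machinery underlying this design is compact enough to sketch in code. The fragment below is a minimal illustration (ours, not the authors' software): it integrates dθ/dt = ω + Z(θ)u(t) from one spike to the next, with an assumed Type-I PRC shape, firing period, and input; the measured PRCs and the optimal charge-balanced waveforms of the study are not reproduced here.

    ```python
    import numpy as np

    # Minimal sketch of the one-dimensional phase model described above:
    #   dtheta/dt = omega + Z(theta) * u(t),
    # where Z is the phase response curve (PRC). The PRC shape, period,
    # and inputs are illustrative assumptions, not the measured PRCs or
    # the designed minimum-energy waveforms.

    T = 0.1                      # natural firing period in seconds (assumed)
    omega = 2 * np.pi / T        # natural frequency (rad/s)

    def Z(theta):
        """Assumed Type-I PRC: nonnegative, zero at the spike (theta = 0)."""
        return 0.5 * (1.0 - np.cos(theta))

    def next_spike_time(u, dt=1e-5):
        """Integrate the phase from one spike (theta = 0) to the next
        (theta = 2*pi) under the input current u(t), with Euler steps."""
        theta, t = 0.0, 0.0
        while theta < 2 * np.pi:
            theta += (omega + Z(theta) * u(t)) * dt
            t += dt
        return t

    print(next_spike_time(lambda t: 0.0))  # ~0.100 s, the natural period
    print(next_spike_time(lambda t: 5.0))  # positive input advances the next spike
    ```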

  2. Minimum energy control for in vitro neurons

    NASA Astrophysics Data System (ADS)

    Nabi, Ali; Stigen, Tyler; Moehlis, Jeff; Netoff, Theoden

    2013-06-01

    Objective. To demonstrate the applicability of optimal control theory for designing minimum energy charge-balanced input waveforms for single periodically-firing in vitro neurons from brain slices of Long-Evans rats. Approach. The method of control uses the phase model of a neuron and does not require prior knowledge of the neuron’s biological details. The phase model of a neuron is a one-dimensional model that is characterized by the neuron’s phase response curve (PRC), a sensitivity measure of the neuron to a stimulus applied at different points in its firing cycle. The PRC for each neuron is experimentally obtained by measuring the shift in phase due to a short-duration pulse injected into the periodically-firing neuron at various phase values. Based on the measured PRC, continuous-time, charge-balanced, minimum energy control waveforms have been designed to regulate the next firing time of the neuron upon application at the onset of an action potential. Main result. The designed waveforms can achieve the inter-spike-interval regulation for in vitro neurons with energy levels that are lower than those of conventional monophasic pulsatile inputs of past studies by at least an order of magnitude. They also provide the advantage of being charge-balanced. The energy efficiency of these waveforms is also shown by performing several supporting simulations that compare the performance of the designed waveforms against that of phase shuffled surrogate inputs, variants of the minimum energy waveforms obtained from suboptimal PRCs, as well as pulsatile stimuli that are applied at the point of maximum PRC. It was found that the minimum energy waveforms perform better than all other stimuli both in terms of control and in the amount of energy used. Specifically, it was seen that these charge-balanced waveforms use at least an order of magnitude less energy than conventional monophasic pulsatile stimuli. Significance. The significance of this work is that it uses concepts from the theory of optimal control and introduces a novel approach in designing minimum energy charge-balanced input waveforms for neurons that are robust to noise and implementable in electrophysiological experiments.

  3. Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.

    2017-08-01

    The single-event upset (SEU) vulnerability of common first- and second-order all-digital phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest order pole of the loop filter and fraction-based phase detectors (PDs) may result in the worst-case error response, i.e., limit cycle errors, often requiring system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs only result in temporary frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.

  4. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    PubMed

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or a Kalman-filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
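
    For readers unfamiliar with the discriminator that the pre-filter is benchmarked against, a minimal sketch of the four-quadrant arctangent (ATAN2) carrier-phase discriminator follows. The amplitude, true phase error, and noise level are assumed demonstration values, not the settings of the paper's experiments.

    ```python
    import numpy as np

    # ATAN2 carrier-phase discriminator on noisy prompt I/Q correlator
    # outputs. Coherent integration yields I = A*cos(phi) + n_I and
    # Q = A*sin(phi) + n_Q; atan2 is unambiguous over (-0.5, 0.5] cycle.

    rng = np.random.default_rng(0)
    A = 1.0                         # signal amplitude after integration (assumed)
    true_err_cycles = 0.15          # true carrier phase error in cycles (assumed)
    phi = 2 * np.pi * true_err_cycles

    n = 1000                        # number of integration intervals
    I = A * np.cos(phi) + 0.1 * rng.standard_normal(n)
    Q = A * np.sin(phi) + 0.1 * rng.standard_normal(n)

    est_cycles = np.arctan2(Q, I) / (2 * np.pi)
    print(est_cycles.mean(), est_cycles.std())  # ~0.15 cycles, small spread
    ```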

  5. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    PubMed

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Whereas one of the predominant causes of medication errors is a drug administration error, a previous study related to our investigations and reviews estimated that the incidence of medication errors constituted 6.7 per 100 administered medication doses. Therefore, we aimed, using the six sigma approach, to propose a way to reduce these errors to less than 1 per 100 administered medication doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a General Government Hospital. First, we systematically studied the current medication use process. Second, we used the six sigma approach by utilizing the five-step DMAIC process (Define, Measure, Analyze, Improve, Control) to find out the real reasons behind such errors. This was to figure out a useful solution to avoid medication error incidences in daily healthcare professional practice. A data sheet was used as the data tool and Pareto diagrams were used as the analysis tool. In our investigation, we identified the real cause behind administered medication errors. The Pareto diagrams used in our study showed that the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that the mistakes in the prescribing phase, especially those due to poor handwritten prescriptions (whose percentage in this phase was 17.6%), are responsible for the consequent mistakes later in the treatment process. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, in the form of Guideline Recommendations to be followed by the physicians. This method can serve as a prior caution to decrease errors in the prescribing phase, which may decrease administered medication error incidences to less than 1%. This behavioral improvement can be effective in improving handwritten prescriptions and decreasing the consequent errors related to administered medication doses to below the global standard; as a result, it enhances patient safety. However, we hope other studies will later be conducted in hospitals to practically evaluate how effective our proposed systematic strategy really is in comparison with other suggested remedies in this field.

  6. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers

    PubMed Central

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-01-01

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or a Kalman-filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (−0.25 cycle, 0.25 cycle) to (−0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz. PMID:29156581

  7. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
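
    To make the recasting step concrete, the sketch below poses the two-bar truss as an optimization problem and hands it to an off-the-shelf optimizer, which is the kind of interface code SOL is meant to generate. It is our illustration, not SOL output, and all loads and material constants are made-up placeholders.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Two-bar truss posed as an optimization problem: minimize weight over
    # tube diameter d and truss height h, subject to stress and Euler
    # buckling constraints. Constants below are illustrative placeholders.
    P, B, t = 33e3, 30.0, 0.1            # load, half-span, wall thickness
    rho, E, sigma_max = 0.3, 3.0e7, 1.0e5

    def member(x):
        d, h = x
        L = np.sqrt(B**2 + h**2)                   # member length
        sigma = P * L / (2 * np.pi * d * t * h)    # axial stress per member
        return d, h, L, sigma

    def weight(x):                                  # objective function
        d, h, L, _ = member(x)
        return 2 * rho * np.pi * d * t * L

    def stress_margin(x):                           # >= 0 when feasible
        *_, sigma = member(x)
        return sigma_max - sigma

    def buckling_margin(x):                         # Euler buckling limit
        d, h, L, sigma = member(x)
        return np.pi**2 * E * (d**2 + t**2) / (8 * L**2) - sigma

    res = minimize(weight, x0=[2.0, 30.0],
                   bounds=[(0.5, 10.0), (5.0, 100.0)],
                   constraints=[{"type": "ineq", "fun": stress_margin},
                                {"type": "ineq", "fun": buckling_margin}])
    print(res.x, res.fun)   # optimal (diameter, height) and minimum weight
    ```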

  8. Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2015-03-01

    We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for the transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by our sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels, and this information is used further in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the terms single and collective adaptive quadrature detection. We also extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.

  9. Sequence-dependent rotation axis changes in tennis.

    PubMed

    Hansen, Clint; Martin, Caroline; Rezzoug, Nasser; Gorce, Philippe; Bideau, Benoit; Isableu, Brice

    2017-09-01

    The purpose of this study was to evaluate the role of rotation axes during a tennis serve. A motion capture system was used to evaluate the contribution of the potential axes of rotation (the minimum inertia axis, the shoulder-centre of mass axis, and the shoulder-elbow axis) during the four discrete tennis serve phases (loading, cocking, acceleration, and follow through). Ten ranked athletes (International Tennis Number 1-3) repeatedly performed a flat serve aiming at a target on the other side of the net. The four serve phases are distinct, and each movement phase appears to be organised around specific rotation axes. The results showed that the limbs' rotational axis does not necessarily coincide with the minimum inertia axis across the cocking phase of the tennis serve. Even though individual serving strategies were observed, all participants showed an effect due to the cocking phase and changed the rotation axis during the task. Taken together, the results showed that despite inter-individual differences, nine out of 10 participants changed the rotation axis towards the minimum inertia and/or the mass axis in an endeavour to maximise external rotation of the shoulder and optimally prepare for the acceleration phase.

  10. An Advanced N -body Model for Interacting Multiple Stellar Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brož, Miroslav

    We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distribution, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one's disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).

  11. Self-referenced locking of optical coherence by single-detector electronic-frequency tagging

    NASA Astrophysics Data System (ADS)

    Shay, T. M.; Benham, Vincent; Spring, Justin; Ward, Benjamin; Ghebremichael, F.; Culpepper, Mark A.; Sanchez, Anthony D.; Baker, J. T.; Pilkington, D.; Berdine, Richard

    2006-02-01

    We report a novel coherent beam combining technique. This is the first actively phase-locked optical fiber array that eliminates the need for a separate reference beam. In addition, only a single photodetector is required. The far-field central spot of the array is imaged onto the photodetector to produce the phase control loop signals. Each leg of the fiber array is phase modulated with a separate RF frequency, thus tagging the optical phase shift for each leg by a separate RF frequency. The optical phase errors for the individual array legs are separated in the electronic domain. In contrast with previous active phase locking techniques, in our system the reference beam is spatially overlapped with all the RF-modulated fiber leg beams onto a single detector. The phase shift between the optical wave in the reference leg and in each RF-modulated leg is measured separately in the electronic domain, and the phase error signal is fed back to the LiNbO3 phase modulator for that leg to minimize the phase error for that leg relative to the reference leg. The advantages of this technique are 1) the elimination of the reference beam and beam combination optics and 2) the electronic separation of the phase error signals without any degradation of the phase locking accuracy. We will present the first theoretical model for self-referenced LOCSET and describe experimental results for a 3 x 3 array.

  12. Blood pool and tissue phase patient motion effects on 82rubidium PET myocardial blood flow quantification.

    PubMed

    Lee, Benjamin C; Moody, Jonathan B; Poitrasson-Rivière, Alexis; Melvin, Amanda C; Weinberg, Richard L; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2018-03-23

    Patient motion can lead to misalignment of left ventricular volumes of interest and subsequently inaccurate quantification of myocardial blood flow (MBF) and flow reserve (MFR) from dynamic PET myocardial perfusion images. We aimed to identify the prevalence of patient motion in both blood and tissue phases and analyze the effects of this motion on MBF and MFR estimates. We selected 225 consecutive patients who underwent dynamic stress/rest rubidium-82 chloride (82Rb) PET imaging. Dynamic image series were iteratively reconstructed with 5- to 10-second frame durations over the first 2 minutes for the blood phase and 10 to 80 seconds for the tissue phase. Motion shifts were assessed by 3 physician readers from the dynamic series and analyzed for frequency, magnitude, time, and direction of motion. The effects of this motion, isolated in time, direction, and magnitude, on global and regional MBF and MFR estimates were evaluated. Flow estimates derived from the motion-corrected images were used as the error references. Mild to moderate motion (5-15 mm) was most prominent in the blood phase, in 63% and 44% of the stress and rest studies, respectively. This motion was observed with frequencies of 75% in the septal and inferior directions for stress and 44% in the septal direction for rest. Images with motion isolated to the blood phase had mean global MBF and MFR errors of 2%-5%. Isolating blood phase motion in the inferior direction resulted in mean MBF and MFR errors of 29%-44% in the RCA territory. Flow errors due to motion isolated to the tissue phase were within 1%. Patient motion was most prevalent in the blood phase, and MBF and MFR errors increased most substantially with motion in the inferior direction. Motion correction focused on these motions is needed to reduce MBF and MFR errors.

  13. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600

  14. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

    The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models, which have the benefit of being very fast to use in standard location algorithms but do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path, and model error effects. An initial method developed is a two-dimensional, path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be readily useful to the standard RSTT user and to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and validation of the error model for Sn, Pg, and Lg phases.

  15. Precision electronic speed controller for an alternating-current motor

    DOEpatents

    Bolie, V.W.

    A high precision controller for an alternating-current multi-phase electrical motor that is subject to a large inertial load. The controller was developed for controlling, in a neutron chopper system, a heavy spinning rotor that must be rotated in phase-locked synchronism with a reference pulse train that is representative of an ac power supply signal having a meandering line frequency. The controller includes a shaft revolution sensor which provides a feedback pulse train representative of the actual speed of the motor. An internal digital timing signal generator provides a reference signal which is compared with the feedback signal in a computing unit to provide a motor control signal. The motor control signal is a weighted linear sum of a speed error voltage, a phase error voltage, and a drift error voltage, each of which is computed anew with each revolution of the motor shaft. The speed error signal is generated by a novel vernier-logic circuit which is drift-free and highly sensitive to small speed changes. The phase error is also computed by digital logic, with adjustable sensitivity around a zero mid-scale value. The drift error signal, generated by long-term counting of the phase error, is used to compensate for any slow changes in the average friction drag on the motor. An auxiliary drift-byte status sensor prevents any disruptive overflow or underflow of the drift-error counter. An adjustable clocked-delay unit is inserted between the controller and the source of the reference pulse train to permit phase alignment of the rotor to any desired offset angle. The stator windings of the motor are driven by two amplifiers which are provided with input signals having the proper quadrature relationship by an exciter unit consisting of a voltage controlled oscillator, a binary counter, a pair of read-only memories, and a pair of digital-to-analog converters.
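
    The control law itself is simple enough to sketch. The fragment below is our paraphrase of the abstract's weighted-sum rule with a clamped drift accumulator; the gains and clamp limit are assumed values, and none of the surrounding vernier-logic or exciter circuitry is modeled.

    ```python
    # Per-revolution control law paraphrased from the abstract: the motor
    # command is a weighted linear sum of speed, phase, and drift errors,
    # where the drift error is a long-term count of the phase error,
    # clamped to emulate the drift-byte overflow/underflow guard.

    KS, KP, KD = 0.8, 0.5, 0.05        # speed/phase/drift weights (assumed)
    DRIFT_LIMIT = 1000.0               # accumulator clamp (assumed)

    drift_error = 0.0

    def control_signal(speed_error, phase_error):
        """Compute the motor control voltage once per shaft revolution."""
        global drift_error
        drift_error = max(-DRIFT_LIMIT,
                          min(DRIFT_LIMIT, drift_error + phase_error))
        return KS * speed_error + KP * phase_error + KD * drift_error

    print(control_signal(2.0, 0.3))    # one example revolution update
    ```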

  16. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCulloch, M; Cazoulat, G; Polan, D

    Purpose: It is well documented that the delivered dose to patients undergoing radiotherapy (RT) is often different from the planned dose due to geometric variability and uncertainties in patient positioning. Recent work suggests that accumulated dose to the GTV is a better predictor of progression than the minimum planned dose to the PTV. The purpose of this study is to evaluate whether deviations from the planned dose could have contributed to tumor progression. Methods: From 2010 to 2014 an in-house Phase II clinical trial of adaptive stereotactic body RT was completed. Of the 90 patients enrolled, 7 patients had a local recurrence defined on contrast enhanced CT or MR imaging 3-21 months after completion of RT. Retrospective dose accumulation was performed using a biomechanical model-based deformable image registration (DIR) algorithm to accumulate the dose based on the kV CBCT acquired prior to each fraction for soft tissue alignment of the patient. The DIR algorithm was previously validated for geometric accuracy in the liver (target registration error = 2.0 mm) and for dose accumulation in a homogeneous image, similar to a liver CBCT (gamma index = 91%). Following dose accumulation, the minimum dose to 0.5 cc of the GTV was compared between the planned and accumulated dose. Work is ongoing to evaluate the tumor control probability based on the planned and accumulated dose. Results: DIR and dose accumulation were performed on all fractions for 6 patients with local recurrence. The difference in minimum dose to 0.5 cc of the GTV ranged from -0.3 to 2.3 Gy over 3-5 fractions. One patient had a potentially significant difference in minimum dose of 2.3 Gy. Conclusion: Dose accumulation can reveal tumor underdosage, improving our ability to understand recurrence and tumor progression patterns, and could aid in adaptive re-planning during therapy to correct for this. This work was supported in part by NIH P01CA059827.

  18. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
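
    A small numerical sketch of the estimator may help. It uses the textbook LMMSE form (weights C_uy C_yy^-1 applied to the noisy measurements) with an exponential Gauss-Markov covariance; the one-dimensional geometry, variances, and correlation distance are assumed values rather than the paper's railway scenario.

    ```python
    import numpy as np

    # LMMSE estimation of a differential correction (DC) at a user location
    # from noisy DCs at reference stations, under a Gauss-Markov spatial
    # model C(d) = sigma2 * exp(-|d| / d_corr). All numbers are assumed.

    rng = np.random.default_rng(1)
    sigma2, d_corr, noise_var = 1.0, 50e3, 0.04   # m^2, m, m^2
    rs_pos = np.array([0.0, 40e3, 90e3])          # reference stations (1-D)
    user_pos = 30e3

    C = lambda d: sigma2 * np.exp(-np.abs(d) / d_corr)
    Cxx = C(rs_pos[:, None] - rs_pos[None, :])    # RS-to-RS covariance
    Cyy = Cxx + noise_var * np.eye(len(rs_pos))   # plus measurement noise
    Cuy = C(user_pos - rs_pos)                    # user-to-RS covariance

    W = Cuy @ np.linalg.inv(Cyy)                  # LMMSE weights

    # One simulated realization: correlated true DCs plus noise.
    x = np.linalg.cholesky(Cxx + 1e-9 * np.eye(3)) @ rng.standard_normal(3)
    y = x + np.sqrt(noise_var) * rng.standard_normal(3)
    print("LMMSE estimate of the DC at the user:", W @ y)
    ```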

  19. A description of medication errors reported by pharmacists in a neonatal intensive care unit.

    PubMed

    Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila

    2017-02-01

    Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar, based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study of medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information and incident details, including error category, medications involved, and follow-up completed. Results A total of 201 NICU pharmacist-reported medication errors were submitted during the study period. None of the reported errors reached the patient or caused harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process, with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports, with anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU, as these were responsible for the greatest numbers of medication errors.

  20. Adaptive Sparse Representation for Source Localization with Gain/Phase Errors

    PubMed Central

    Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin

    2011-01-01

    Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate coherent signal sources because the spectrum estimation is based on an optimization technique, such as L1-norm minimization, rather than on subspace orthogonality. However, in an actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold, so the SR estimation performance is degraded. In this paper, an adaptive SR algorithm is proposed to improve robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match the actual scenario. The simulation results demonstrate the estimation robustness of the proposed method to the gain/phase error. PMID:22163875
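
    The baseline (non-adaptive) SR formulation that the paper builds on can be sketched as a small L1 program, here written with the cvxpy package; the array size, angle grid, source angles, and residual bound are demonstration values, and the adaptive basis update itself is not shown.

    ```python
    import numpy as np
    import cvxpy as cp

    # L1-norm DOA estimation over a grid of candidate angles: recover a
    # sparse spectrum x from one array snapshot y by minimizing ||x||_1
    # subject to a residual bound. Uniform linear array, assumed values.

    M, d = 8, 0.5                         # sensors, spacing in wavelengths
    grid = np.deg2rad(np.arange(-90, 91, 2))
    A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))

    rng = np.random.default_rng(2)
    a = lambda deg: np.exp(-2j * np.pi * d * np.arange(M)
                           * np.sin(np.deg2rad(deg)))
    y = (1.0 * a(-20) + 0.8 * a(30)
         + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)))

    x = cp.Variable(len(grid), complex=True)
    cp.Problem(cp.Minimize(cp.norm1(x)),
               [cp.norm2(A @ x - y) <= 0.3]).solve()
    print(np.rad2deg(grid[np.abs(x.value) > 0.1]))  # peaks near -20 and 30
    ```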

  1. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas and negative bias in coastal areas, and its average root mean square error for all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between estimated vapor pressure deficit (the difference between saturation vapor pressure at the minimum and average temperatures) and measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
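
    The baseline estimator is easy to state in code. The sketch below uses the standard FAO-56 saturation vapor pressure formula and the dew point assumption Tdew ≈ Tmin; it reproduces the baseline method and the estimated VPD quantity used by the improved model, not the fitted bi-linear relation itself.

    ```python
    import math

    def e_sat(T):
        """FAO-56 saturation vapor pressure (kPa) at temperature T (deg C)."""
        return 0.6108 * math.exp(17.27 * T / (T + 237.3))

    def e_actual_dewpoint(t_min):
        """Dew point method: e_a = e_sat(T_min), i.e., Tdew ~= Tmin."""
        return e_sat(t_min)

    t_min, t_max = 12.0, 28.0              # example daily extremes (deg C)
    print(e_actual_dewpoint(t_min))        # ~1.40 kPa

    # The 'estimated vapor pressure deficit' of the improved model is the
    # difference between saturation pressure at the mean and minimum
    # temperatures (the fitted bi-linear mapping is not reproduced here).
    t_mean = (t_min + t_max) / 2
    print(e_sat(t_mean) - e_sat(t_min))    # estimated VPD proxy (kPa)
    ```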

  2. Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly

    PubMed Central

    Yao, Jian; Levine, Judah; Weiss, Marc

    2015-01-01

    The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is “day boundary discontinuity,” which has been studied extensively and can be solved by multiple methods [1-8]. The other category of discontinuity, called “anomaly boundary discontinuity (anomaly-BD),” comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD shows that we can fix the discontinuity if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, sometimes the data anomaly lasts longer than 20 min, so a better curve-fitting strategy is needed. Besides, a cycle slip, as another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy, i.e., a satellite-clock-aided curve-fitting strategy with the function of cycle slip detection. Basically, this new strategy applies the satellite clock correction to the GPS data. After that, we perform the polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and finds the number of cycle slips by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference with the GPS signal, known as “jamming”, can possibly lead to a time-transfer error, and that this new strategy can compensate for jamming outages. Thus, the new strategy can eliminate the impact of jamming on time transfer. As a whole, we greatly improve the robustness of the GPS CP time transfer. PMID:26958451
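
    The curve-fitting idea can be illustrated on synthetic data. The toy below fits a low-order polynomial across an artificial 30-min gap in a smooth clock-like series; the real method additionally applies satellite clock corrections and searches cycle-slip candidates by minimizing the fitting residual.

    ```python
    import numpy as np

    # Bridge a data anomaly by fitting a low-order polynomial to the data
    # on both sides of the gap and evaluating the fit across it. The
    # quadratic trend and noise level below are synthetic assumptions.

    rng = np.random.default_rng(3)
    t = np.arange(0.0, 120.0, 0.5)                 # time in minutes
    phase = 1e-3 * t + 2e-5 * t**2 + 1e-4 * rng.standard_normal(t.size)

    gap = (t > 60) & (t < 90)                      # a 30-min anomaly
    coeffs = np.polyfit(t[~gap], phase[~gap], deg=2)
    repaired = phase.copy()
    repaired[gap] = np.polyval(coeffs, t[gap])     # fill the gap with the fit

    resid = phase[~gap] - np.polyval(coeffs, t[~gap])
    print("fit residual RMS:", resid.std())        # ~1e-4, the noise level
    ```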

  3. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
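
    A standard recursive least-squares loop conveys the flavor of such an on-line identifier, updating both the parameter estimate and its error covariance with each observation; the paper's filter additionally handles multiplicative noise, which this sketch omits. The system and noise level below are assumed.

    ```python
    import numpy as np

    # Recursive least squares for y_k = phi_k' * theta + noise, tracking
    # the parameter estimate and its error covariance P at each step.

    rng = np.random.default_rng(4)
    theta_true = np.array([1.5, -0.7])      # unknown parameters (assumed)
    theta_hat = np.zeros(2)
    P = 100.0 * np.eye(2)                   # initial error covariance

    for _ in range(500):
        phi = rng.standard_normal(2)        # regressor vector
        y = phi @ theta_true + 0.1 * rng.standard_normal()
        k = P @ phi / (1.0 + phi @ P @ phi) # gain
        theta_hat = theta_hat + k * (y - phi @ theta_hat)
        P = P - np.outer(k, phi) @ P        # covariance update

    print(theta_hat)                        # converges toward [1.5, -0.7]
    ```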

  4. 78 FR 17155 - Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.

  5. Correction of phase errors in quantitative water-fat imaging using a monopolar time-interleaved multi-echo gradient echo sequence.

    PubMed

    Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C

    2017-09-01

    To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address a) a phase term induced by echo misalignments, which can be measured with a reference scan using reversed readout polarity, b) a phase term induced by the concomitant gradient field, which can be predicted from the gradient waveforms, and c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps for PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with the theory, an echo-time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  6. An all digital phase locked loop for synchronization of a sinusoidal signal embedded in white Gaussian noise

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1973-01-01

    An all-digital phase-locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements, their functions, and the phase lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities for the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase error state, taken modulo 2π, forms a finite-state Markov chain, which enables the calculation of steady-state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
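
    A toy simulation of a once-per-cycle loop with a two-level quantizer, in the general spirit of the design above, is sketched below; the step size, signal amplitude, and noise level are assumed values, and the loop filter logic of the paper is omitted.

    ```python
    import numpy as np

    # Once per carrier cycle the loop samples the noisy sinusoid near its
    # expected zero crossing; a two-level quantizer takes the sign of the
    # sample and the loop steps its phase estimate by a fixed increment.

    rng = np.random.default_rng(5)
    step = 2 * np.pi / 64            # phase correction per cycle (assumed)
    amp, sigma = 1.0, 0.3            # signal amplitude, noise std (assumed)
    phase_err = np.pi / 4            # initial phase error (rad)

    history = []
    for _ in range(200):
        sample = amp * np.sin(phase_err) + sigma * rng.standard_normal()
        phase_err -= step * np.sign(sample)   # bang-bang correction
        history.append(phase_err)

    print(np.mean(history[-50:]))    # hovers near zero in steady state
    ```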

  7. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, this only guarantees minimal 2D projection error on the image plane, not minimal 3D reconstruction error. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
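
    As a concrete illustration of the back projection idea, the sketch below intersects the ray through each detected image point with the checkerboard plane (Z = 0 in board coordinates) and forms 3D residuals against the known board points, which a nonlinear solver then minimizes. The single-view, distortion-free pinhole model and all names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def back_project(params, pixels):
    """Intersect pixel rays with the board plane Z = 0 (board coordinates).
    params = [fx, fy, cx, cy, rvec (3), tvec (3)] for one view."""
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = np.asarray(params[7:10])
    # Pinhole ray directions in the camera frame, one row per pixel.
    d_cam = np.stack([(pixels[:, 0] - cx) / fx,
                      (pixels[:, 1] - cy) / fy,
                      np.ones(len(pixels))], axis=1)
    d = d_cam @ R                  # rotate rays into board coordinates (R^T d)
    c = -R.T @ t                   # camera centre in board coordinates
    s = -c[2] / d[:, 2]            # ray parameter at the plane Z = 0
    return c + s[:, None] * d

def residuals(params, pixels, board_points):
    """In-plane 3D residuals (Z components vanish by construction)."""
    return (back_project(params, pixels)[:, :2] - board_points[:, :2]).ravel()

# Refinement, given initial parameters x0 from a forward-model calibration:
# x_refined = least_squares(residuals, x0, args=(pixels, board_points)).x
```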

  8. Multimodal pressure-flow method to assess dynamics of cerebral autoregulation in stroke and hypertension.

    PubMed

    Novak, Vera; Yang, Albert C C; Lepicovsky, Lukas; Goldberger, Ary L; Lipsitz, Lewis A; Peng, Chung-Kang

    2004-10-25

    This study evaluated the effects of stroke on regulation of cerebral blood flow in response to fluctuations in systemic blood pressure (BP). The autoregulatory dynamics are difficult to assess because of the nonstationarity and nonlinearity of the component signals. We studied 15 normotensive, 20 hypertensive and 15 minor stroke subjects (48.0 +/- 1.3 years). BP and blood flow velocities (BFV) from middle cerebral arteries (MCA) were measured during the Valsalva maneuver (VM) using transcranial Doppler ultrasound. A new technique, multimodal pressure-flow analysis (MMPF), was implemented to analyze these short, nonstationary signals. MMPF analysis decomposes complex BP and BFV signals into multiple empirical modes, representing their instantaneous frequency-amplitude modulation. The empirical mode corresponding to the VM BP profile was used to construct the continuous phase diagram and to identify the minimum and maximum values from the residual BP (BPR) and BFV (BFVR) signals. The BP-BFV phase shift was calculated as the difference between the phase corresponding to the BPR and BFVR minimum (maximum) values. BP-BFV phase shifts were significantly different between groups. In the normotensive group, the BFVR minimum and maximum preceded the BPR minimum and maximum, respectively, leading to large positive values of BP-BFV shifts. In the stroke and hypertensive groups, the resulting BP-BFV phase shift was significantly smaller compared to the normotensive group. A standard autoregulation index did not differentiate the groups. The MMPF method enables evaluation of autoregulatory dynamics based on instantaneous BP-BFV phase analysis. Regulation of BP-BFV dynamics is altered with hypertension and after stroke, rendering blood flow dependent on blood pressure.
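
    The final step of the method reduces to a simple comparison once the decomposition is done: read off the instantaneous phase of the VM-related mode at the extrema of the residual signals. A minimal numpy sketch follows; the empirical mode decomposition itself is not shown, and the variable names are illustrative.

```python
import numpy as np

def bp_bfv_phase_shift(phase, bp_residual, bfv_residual):
    """phase: instantaneous phase (rad) of the VM-related empirical mode,
    assumed monotonically increasing over the maneuver; returns the BP-BFV
    phase shifts at the minima and maxima (positive when BFV leads BP)."""
    shift_min = phase[np.argmin(bp_residual)] - phase[np.argmin(bfv_residual)]
    shift_max = phase[np.argmax(bp_residual)] - phase[np.argmax(bfv_residual)]
    return shift_min, shift_max
```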

  9. Steady-state phase error for a phase-locked loop subjected to periodic Doppler inputs

    NASA Technical Reports Server (NTRS)

    Chen, C.-C.; Win, M. Z.

    1991-01-01

    The performance of a carrier phase locked loop (PLL) driven by a periodic Doppler input is studied. By expanding the Doppler input into a Fourier series and applying the linearized PLL approximations, it is easy to show that, for periodic frequency disturbances, the resulting steady-state phase error is also periodic. Compared to the method of expanding the frequency excursion into a power series, the Fourier expansion method can be used to predict the maximum phase error excursion for a periodic Doppler input. For systems with a large Doppler rate fluctuation, such as an optical transponder aboard an Earth-orbiting spacecraft, the method can be applied to test whether a lower order tracking loop can provide satisfactory tracking, thereby saving the effort of a higher order loop design.
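
    In the linearized approximation this is a one-line calculation. Writing the Doppler-induced frequency disturbance as a Fourier series and passing each harmonic through the loop's error transfer function $H_e(s) = 1 - H(s)$ gives, as a standard linearized-PLL result,

$$\Delta f(t) = \sum_k a_k \cos(k\omega_0 t + \vartheta_k) \;\Longrightarrow\; |\theta_{e,k}| = \left|H_e(jk\omega_0)\right| \, \frac{2\pi a_k}{k\omega_0},$$

    so the steady-state phase error is periodic and its maximum excursion is bounded by the sum of the harmonic amplitudes $\sum_k |\theta_{e,k}|$.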

  10. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σ_r. By splitting up the overall test into multiple binary tests in various ways, we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σ_r), recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, C(σ₁, …, σ_r) = min_{j<k} C(σ_j, σ_k).

  11. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    PubMed

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (k_H) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of k_H of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates k_H with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate performance improvements of our design with an example measuring k_H for naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear phase ratio to inverse GC response region (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
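
    The estimator at the heart of the PRV method is easy to state in code: regress the inverse GC peak area on the gas/liquid volume ratio and take the slope-to-intercept ratio. The sketch below uses made-up numbers; the paper's contribution, choosing which volume ratios to run (implemented in optDesignSlopeInt), is not reproduced here.

```python
import numpy as np

# Made-up illustrative data: vials drawn from one parent solution.
volume_ratio = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # V_gas / V_liquid
inv_peak_area = np.array([0.81, 0.95, 1.22, 1.78, 2.90])  # inverse GC response

slope, intercept = np.polyfit(volume_ratio, inv_peak_area, 1)
k_H = slope / intercept   # PRV estimate of the Henry's law constant
print(f"k_H estimate: {k_H:.3f}")
```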

  12. Pulsed laser-based optical frequency comb generator for high capacity wavelength division multiplexed passive optical network supporting 1.2 Tbps

    NASA Astrophysics Data System (ADS)

    Ullah, Rahat; Liu, Bo; Zhang, Qi; Saad Khan, Muhammad; Ahmad, Ibrar; Ali, Amjad; Khan, Razaullah; Tian, Qinghua; Yan, Cheng; Xin, Xiangjun

    2016-09-01

    An architecture for flattened and broad-spectrum multicarriers is presented by generating 60 comb lines from a pulsed laser driven by a user-defined bit stream in cascade with three modulators. The proposed scheme is a cost-effective architecture for the optical line terminal (OLT) in a wavelength division multiplexed passive optical network (WDM-PON) system. The optical frequency comb generator consists of a pulsed laser in cascade with a phase modulator and two Mach-Zehnder modulators driven by an RF source, incorporating no phase shifter, filter, or electrical amplifier. Optical frequency comb generation is deployed in a simulation environment at the OLT of a WDM-PON system that supports a 1.2-Tbps aggregate data rate (60 carriers × 20 Gbps each). With 10-GHz frequency spacing, each frequency tone carries a 20-Gbps data signal based on differential quadrature phase shift keying (DQPSK) in downlink transmission. We adopt DQPSK modulation in the downlink because it carries 2 bits per symbol, which increases the data rate of the WDM-PON system. Furthermore, the DQPSK format is tolerant to different types of dispersion and has high spectral efficiency with a less complex configuration. Part of the downlink power is utilized in the uplink transmission; the uplink is based on intensity-modulated on-off keying. Minimum power penalties were observed, with excellent eye diagrams and other transmission performance at the specified bit error rates.

  13. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique, the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method (the DECAT code). The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared to the measured results better than the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.

  14. Spatial and temporal variability of the overall error of National Atmospheric Deposition Program measurements determined by the USGS collocated-sampler program, water years 1989-2001

    USGS Publications Warehouse

    Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.

    2005-01-01

    Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.

  15. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  16. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  17. Minimum savings requirements in shared savings provider payment.

    PubMed

    Pope, Gregory C; Kautter, John

    2012-11-01

    Payer (insurer) sharing of savings is a way of motivating providers of medical services to reduce cost growth. A Medicare shared savings program is established for accountable care organizations in the 2010 Patient Protection and Affordable Care Act. However, savings created by providers cannot be distinguished from the normal (random) variation in medical claims costs, setting up a classic principal-agent problem. To lessen the likelihood of paying undeserved bonuses, payers may pay bonuses only if observed savings exceed minimum levels. We study the trade-off between two types of errors in setting minimum savings requirements: paying bonuses when providers do not create savings and not paying bonuses when providers create savings. Copyright © 2011 John Wiley & Sons, Ltd.
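
    The trade-off can be made concrete with a toy model. Suppose observed savings equal true savings plus Gaussian noise from random claims-cost variation, and a bonus is paid only when observed savings exceed a minimum requirement m; the two error probabilities then move in opposite directions as m grows. All numbers below are illustrative assumptions, not the paper's calibration.

```python
from scipy.stats import norm

sigma = 0.02          # sd of random claims-cost variation (fraction of benchmark)
true_savings = 0.01   # savings created by a genuinely cost-reducing provider

for m in (0.00, 0.01, 0.02, 0.03):
    p_undeserved = norm.sf(m, loc=0.0, scale=sigma)         # bonus paid, no savings
    p_missed = norm.cdf(m, loc=true_savings, scale=sigma)   # savings, no bonus
    print(f"m={m:.2f}  P(undeserved bonus)={p_undeserved:.3f}  "
          f"P(missed bonus)={p_missed:.3f}")
```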

  18. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
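
    LOOP's structure (a noisy frequency detector, a first-order R-C loop filter, and a proportional frequency correction) is easy to mock up. The sketch below is a minimal discrete-time analogue in Python, not a port of the FORTRAN 77 program; the gain, filter coefficient, and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 2000
f_err = 200.0       # initial frequency error, Hz
loop_gain = 0.05    # per-bit correction gain (assumed)
alpha = 0.9         # one-pole R-C filter coefficient (assumed)
noise_sd = 20.0     # frequency-detector noise, Hz (assumed)

filt = 0.0
history = np.empty(n_bits)
for k in range(n_bits):
    measured = f_err + rng.normal(0.0, noise_sd)  # noisy frequency estimate
    filt = alpha * filt + (1 - alpha) * measured  # first-order R-C filter
    f_err -= loop_gain * filt                     # AFC frequency correction
    history[k] = f_err

# Statistics over the second half, after the transient has settled.
print(f"mean residual frequency error: {history[n_bits//2:].mean():.2f} Hz")
print(f"std  residual frequency error: {history[n_bits//2:].std():.2f} Hz")
```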

  19. Bladder cancer diagnosis with CT urography: test characteristics and reasons for false-positive and false-negative results.

    PubMed

    Trinh, Tony W; Glazer, Daniel I; Sadow, Cheryl A; Sahni, V Anik; Geller, Nina L; Silverman, Stuart G

    2018-03-01

    To determine the test characteristics of CT urography for detecting bladder cancer in patients with hematuria and those undergoing surveillance, and to analyze reasons for false-positive and false-negative results. A HIPAA-compliant, IRB-approved retrospective review of reports from 1623 CT urograms between 10/2010 and 12/31/2013 was performed. 710 examinations for hematuria or a history of bladder cancer were compared to cystoscopy performed within 6 months. The reference standard was surgical pathology or a minimum of 1 year of clinical follow-up. False-positive and false-negative examinations were reviewed to determine reasons for errors. Ninety-five bladder cancers were detected. CT urography accuracy was 91.5% (650/710), sensitivity 86.3% (82/95), specificity 92.4% (568/615), positive predictive value 63.6% (82/129), and negative predictive value 97.8% (568/581). Of 43 false positives, the majority of interpretation errors were due to benign prostatic hyperplasia (n = 12), trabeculated bladder (n = 9), and treatment changes (n = 8). Other causes included blood clots, mistaken normal anatomy, and infectious/inflammatory changes; some had no cystoscopic correlate. Of 13 false negatives, 11 were due to technique, one to a large urinary residual, and one to artifact. There were no errors in perception. CT urography is an accurate test for diagnosing bladder cancer; however, in protocols relying predominantly on excretory phase images, overall sensitivity remains insufficient to obviate cystoscopy. Awareness of bladder cancer mimics may reduce false-positive results. Improvements in CTU technique may reduce false-negative results.
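
    The reported ratios imply the full 2 x 2 table, from which all of the test characteristics can be re-derived, as the short check below shows (counts taken from the reported fractions).

```python
TP, FN, FP, TN = 82, 13, 47, 568   # implied by 82/95, 568/615, 82/129, 568/581

sensitivity = TP / (TP + FN)                 # 82/95   = 0.863
specificity = TN / (TN + FP)                 # 568/615 = 0.924
ppv = TP / (TP + FP)                         # 82/129  = 0.636
npv = TN / (TN + FN)                         # 568/581 = 0.978
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 650/710 = 0.915
print(sensitivity, specificity, ppv, npv, accuracy)
```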

  20. Optimal threshold of error decision related to non-uniform phase distribution QAM signals generated from MZM based on OCS

    NASA Astrophysics Data System (ADS)

    Han, Xifeng; Zhou, Wen

    2018-03-01

    Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary on the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal error-decision threshold for this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16-Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter-wave signal at 36 GHz is improved from 1 × 10⁻³ to 1 × 10⁻⁴ at −4.6 dBm input power into the photodiode.

  1. Purification of Logic-Qubit Entanglement.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-07-05

    Recently, logic-qubit entanglement has shown its potential for application in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement equals the bit-flip error in logic-qubit entanglement, which can also be purified. This entanglement purification protocol may provide some potential applications in future quantum communication and quantum networks.

  2. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users, each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.

  3. Results on the spatial resolution of repetitive transcranial magnetic stimulation for cortical language mapping during object naming in healthy subjects.

    PubMed

    Sollmann, Nico; Hauck, Theresa; Tussis, Lorena; Ille, Sebastian; Maurer, Stefanie; Boeckh-Behrens, Tobias; Ringel, Florian; Meyer, Bernhard; Krieg, Sandro M

    2016-10-24

    The spatial resolution of repetitive navigated transcranial magnetic stimulation (rTMS) for language mapping is largely unknown. Thus, to determine a minimum spatial resolution of rTMS for language mapping, we evaluated the mapping sessions derived from 19 healthy volunteers for cortical hotspots of no-response errors. Then, the distances between hotspots (stimulation points with a high error rate) and adjacent mapping points (stimulation points with low error rates) were evaluated. Mean distance values of 13.8 ± 6.4 mm (from hotspots to ventral points, range 0.7-30.7 mm), 10.8 ± 4.8 mm (from hotspots to dorsal points, range 2.0-26.5 mm), 16.6 ± 4.8 mm (from hotspots to apical points, range 0.9-27.5 mm), and 13.8 ± 4.3 mm (from hotspots to caudal points, range 2.0-24.2 mm) were measured. According to the results, the minimum spatial resolution of rTMS should principally allow for the identification of a particular gyrus, and according to the literature, it is in good accordance with the spatial resolution of direct cortical stimulation (DCS). Since measurement was performed between hotspots and adjacent mapping points and not on a finer-grained basis, we only refer to a minimum spatial resolution. Furthermore, refinement of our results within the scope of a prospective study combining rTMS and DCS for resolution measurement during language mapping should be the next step.

  4. Cost effective stream-gaging strategies for the Lower Colorado River basin; the Blythe field office operations

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, Edward J.

    1980-01-01

    This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)

  5. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications

    PubMed Central

    Kim, Changhwa; Shin, DongHyun

    2017-01-01

    There are wireless networks in which communications are typically unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method, which plays the most important role in the reduction of sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than the others in terms of the average sensor data value error rate caused by sensor data loss. PMID:28498312

  6. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications.

    PubMed

    Kim, Changhwa; Shin, DongHyun

    2017-05-12

    There are wireless networks in which communications are typically unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method, which plays the most important role in the reduction of sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than the others in terms of the average sensor data value error rate caused by sensor data loss.

  7. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259

  8. Macular Ganglion Cell and Retinal Nerve Fiber Layer Thickness in Children With Refractive Errors-An Optical Coherence Tomography Study.

    PubMed

    Goh, Jody P; Koh, Victor; Chan, Yiong Huak; Ngo, Cheryl

    2017-07-01

    To study the distribution of macular ganglion cell-inner plexiform layer (GC-IPL) thickness and peripapillary retinal nerve fiber layer (RNFL) thickness in children with refractive errors. Two hundred forty-three healthy eyes from 139 children with refractive error ranging from -10.00 to +5.00 D were recruited from the National University Hospital Eye Surgery outpatient clinic. After a comprehensive ocular examination, refraction, and axial length (AL) measurement (IOLMaster), macular GC-IPL and RNFL thickness values were obtained with a spectral domain Cirrus high definition optical coherence tomography system (Carl Zeiss Meditec Inc.). Only scans with signal strength of >6/10 were included. Correlation between variables was calculated using the Pearson correlation coefficient. A multivariate analysis using mixed models was done to adjust for confounders. The mean spherical equivalent refraction was -3.20±3.51 D and mean AL was 24.39±1.72 mm. Average, minimum, superior, and inferior GC-IPL were 82.59±6.29, 77.17±9.65, 83.68±6.96, and 81.64±6.70 μm, respectively. Average, superior, and inferior peripapillary RNFL were 99.00±11.45, 123.20±25.81, and 124.24±22.23 μm, respectively. Average, superior, and inferior GC-IPL were correlated with AL (β=-2.056, P-value 0.000; β=-2.383, P-value 0.000; β=-1.721, P-value 0.000), but minimum GC-IPL was not (β=-1.056, P-value 0.115). None of the RNFL parameters were correlated with AL. This study establishes normative macular GC-IPL and RNFL thickness in children with refractive errors. Our results suggest that high definition optical coherence tomography RNFL parameters and minimum GC-IPL are not affected by AL or myopia in children, and therefore warrants further evaluation in pediatric glaucoma patients.

  9. Performance optimization of dense-array concentrator photovoltaic system considering effects of circumsolar radiation and slope error.

    PubMed

    Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui

    2015-07-27

    This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system comprising a non-imaging dish concentrator, considering the effects of circumsolar radiation and slope error. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar-cell interconnection circuit in the dense-array concentrator photovoltaic module is proposed, minimizing the current mismatch caused by the non-uniformity of concentrated sunlight. An optimized interconnection layout with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.

  10. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964

  11. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  12. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, of great benefit to virtual face modeling.

  13. Digital Mirror Device Application in Reduction of Wave-front Phase Errors

    PubMed Central

    Zhang, Yaping; Liu, Yan; Wang, Shuxue

    2009-01-01

    In order to correct the image distortion created by the mixing/shear layer, creative and effective correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, the performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. Through combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover degraded images caused by unforeseen error sources. PMID:22574016

  14. Phase behavior of metastable liquid silicon at negative pressure: Ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Yu, Y. J.; Yan, J. L.; Ding, M. C.; Zhao, X. G.; Wang, H. Y.

    2016-04-01

    Extensive first-principles molecular dynamics simulations are performed to study the phase behavior of metastable liquid Si at negative pressure. Our results show that the high-density liquid (HDL) and HDL-vapor spinodals indeed form a continuous reentrant curve, and the liquid-liquid critical point seems to coincide with its minimum. The line of density maxima also has a strong tendency to pass through this minimum. The phase behavior of metastable liquid Si therefore tends to follow a critical-point-free scenario rather than the second-critical-point scenario based on the Stillinger-Weber (SW) potential.

  15. Optimization of traffic data collection for specific pavement design applications.

    DOT National Transportation Integrated Search

    2006-05-01

    The objective of this study is to establish the minimum traffic data collection effort required for pavement design applications satisfying a maximum acceptable error under a prescribed confidence level. The approach consists of simulating the traffi...

  16. A linear programming approach to characterizing norm bounded uncertainty from experimental data

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.; Bayard, D. S.; Yam, Y.

    1991-01-01

    The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
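
    LPSOF itself solves a linear program, but the object it produces can be illustrated with the standard real-cepstrum construction: given a sampled magnitude function, folding the cepstrum yields an (approximately) minimum-phase frequency response with exactly that magnitude. The sketch below is this textbook construction, not the LPSOF algorithm.

```python
import numpy as np

def minimum_phase_response(magnitude):
    """Return an (approximately) minimum-phase frequency response whose
    magnitude equals `magnitude`, sampled on a full FFT grid of even length."""
    log_mag = np.log(np.maximum(magnitude, 1e-12))
    cep = np.fft.ifft(log_mag).real        # real cepstrum of the log magnitude
    n = len(cep)
    folded = np.zeros(n)
    folded[0] = cep[0]                     # keep the DC cepstral term
    folded[1:n // 2] = 2 * cep[1:n // 2]   # fold the anti-causal part forward
    folded[n // 2] = cep[n // 2]           # Nyquist term (n even)
    return np.exp(np.fft.fft(folded))      # minimum-phase H on the same grid

# Toy check: reuse the magnitude of a known short filter.
mag = np.abs(np.fft.fft(np.r_[1.0, 0.5, 0.25, np.zeros(13)]))
H_min = minimum_phase_response(mag)
assert np.allclose(np.abs(H_min), mag)
```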

  17. Application of Twin Beams in Mach-Zehnder Interferometer

    NASA Technical Reports Server (NTRS)

    Zhang, J. X.; Xie, C. D.; Peng, K. C.

    1996-01-01

    Using the twin beams generated from a parametric amplifier to drive the two ports of a Mach-Zehnder interferometer, it is shown that, in the large-gain limit, the minimum detectable optical phase shift can be largely reduced to the Heisenberg limit (1/n), which is far below the shot noise limit (1/√n). The dependence of the minimum detectable phase shift on the parametric gain and on inefficient photodetectors is discussed.
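
    For reference, the two limits quoted above are, for a mean photon number $n$,

$$\Delta\phi_{\mathrm{SNL}} = \frac{1}{\sqrt{n}}, \qquad \Delta\phi_{\mathrm{HL}} = \frac{1}{n},$$

    so reaching the Heisenberg limit improves the minimum detectable phase shift by a factor of $\sqrt{n}$ over a shot-noise-limited interferometer.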

  18. The ultraviolet variations of iota Cas

    NASA Technical Reports Server (NTRS)

    Molnar, M. R.; Mallama, A. D.; Soskey, D. G.; Holm, A. V.

    1976-01-01

    The Ap variable star iota Cas was observed with the photometers on OAO-2 covering the spectral range 1430-4250 A. The ultraviolet light curves show a double wave with primary minimum and maximum at phases ≈0.00 and 0.35, respectively. Secondary minimum light is at phase ≈0.65, with the secondary maximum at phase ≈0.85. The light curves longward of 3150 A vary in opposition to those shortward of this 'null region'. Ground-based coude spectra show that the Fe II and Cr II line strengths have a double-wave variation such that maximum strength occurs at minimum ultraviolet light. We suggest that the strong ultraviolet opacities due to photoionization and line blanketing by these metals may cause the observed photometric variations. We have also constructed an oblique-rotator model which shows iron and chromium lying in a great-circle band rather than in circular spots.

  19. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  20. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

    Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims at correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.

  1. Magnetic-field sensing with quantum error detection under the effect of energy relaxation

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Yuichiro; Benjamin, Simon

    2017-03-01

    A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique to detect and recover from an error does not improve the sensitivity compared with a single-qubit sensor. This is a consequence of the fact that energy relaxation induces both phase-flip and bit-flip noise, where the former cannot be distinguished from the relative phase induced by the target field. However, we have found that we can improve the sensitivity if we adopt postselection to discard the state when an error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit under the same conditions.

  2. Evaluation of quality indicators in a laboratory supporting tertiary cancer care facilities in India.

    PubMed

    Kumar, Savitha Anil; Jayanna, Prashanth; Prabhudesai, Shilpa; Kumar, Ajai

    2014-01-01

    To collect and tabulate errors and nonconformities in the preanalytical, analytical, and postanalytical process phases in a diagnostic clinical laboratory that supports a super-specialty cancer center in India, and identify areas of potential improvement in patient services. We collected data from our laboratory during a period of 24 months. Departments in the study included clinical biochemistry, hematology, clinical pathology, microbiology and serology, surgical pathology, and molecular pathology. We had initiated quality assessment based on international standards in our laboratory in 2010, with the aim of obtaining accreditation by national and international governing bodies. We followed the guidelines specified by International Organization for Standardization (ISO) 15189:2007 to identify noncompliant elements of our processes. Among a total of 144,030 specimens that our referral laboratory received during the 2-year period of our study, we uncovered an overall error rate for all 3 process phases of 1.23%; all of our error rates closely approximated the results from our peer institutions. Errors were most common in the preanalytical phase in both years of study; preanalytical- and postanalytical-phase errors constituted more than 90% of all errors. Further improvements are warranted in laboratory services and are contingent on adequate training and interdepartmental communication and cooperation. Copyright© by the American Society for Clinical Pathology (ASCP).

  3. Minimum probe length for unique identification of all open reading frames in a microbial genome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, B A; Ng, J; Fitch, J P

    2000-03-05

    In this paper, we determine the minimum hybridization probe length needed to uniquely identify at least 95% of the open reading frames (ORFs) in an organism. We analyze the whole genome sequences of 17 species: 11 bacteria, 4 archaea, and 2 eukaryotes. We also present a mathematical model for minimum probe length based on the assumptions that all ORFs are random, of constant length, and contain an equal distribution of bases. The model accurately predicts the minimum probe length for all species, but it incorrectly predicts that all ORFs may be uniquely identified. However, a probe length of just 9 bases is adequate to identify over 95% of the ORFs for all 15 prokaryotic species we studied. Using a minimum probe length, while accepting that some ORFs may not be identified and that data will be lost due to hybridization error, may result in significant savings in microarray and oligonucleotide probe design.
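
    The uniqueness criterion is straightforward to test empirically: an ORF is identifiable with probes of length L if at least one of its L-mers occurs in no other ORF. A short sketch of that check follows, with toy sequences; real use would load a genome's full ORF set.

```python
from collections import Counter

def fraction_identifiable(orfs, L):
    """Fraction of ORFs containing at least one L-mer found in no other ORF."""
    counts = Counter()
    per_orf = []
    for seq in orfs:
        kmers = {seq[i:i + L] for i in range(len(seq) - L + 1)}
        per_orf.append(kmers)
        counts.update(kmers)            # counts ORFs containing each L-mer
    hits = sum(any(counts[k] == 1 for k in kmers) for kmers in per_orf)
    return hits / len(orfs)

# Toy example (real use: all annotated ORF sequences of a genome):
orfs = ["ATGGCTAAAGGT", "ATGCCTTTAGGC", "ATGGCTAAATTT"]
print(fraction_identifiable(orfs, L=4))
```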

  4. Laboratory errors and patient safety.

    PubMed

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to patients. On the other hand, errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are consistent with those published from the USA and other countries, which shows that laboratory problems are universal and need general standardization and benchmarking measures. This is among the first such data published from an Arab country, and it highlights the great need for universal standardization and benchmarking measures to control laboratory work.

  5. Performance of traffic-alert collision avoidance (TCAS) antennas in the presence of scatterers

    NASA Technical Reports Server (NTRS)

    Sampath, K. S.; Rojas, R. G.; Burnside, W. D.

    1993-01-01

    The performance of two TCAS systems is studied in the presence of electromagnetic scatterers. TCAS is an aircraft-mounted angle of arrival (AOA) system, which estimates the bearing of a signal transmitted from a mode-S transponder on another nearby aircraft (intruder). Two systems are studied: (1) the Comparison of Relative Amplitude (CRA) system, and (2) the Spiral Phase Antenna (SPA). The CRA antenna receives the reply via four switched beams. The bearing is estimated by comparing the amplitudes of the received signal. The SPA is based on the phase interferometer, which utilizes the received phase via sum and difference beams. The AOA is computed by comparing the reply with similar values on a calibration table, which is generated by modeling the TCAS antenna on the bare fuselage of a Boeing 727-200. The antenna patterns for the TCAS are found via high frequency methods based on the Uniform Geometric Theory of Diffraction (UTD). By minimizing the standard deviation of the bearing error in a specified angular sector, optimal locations for top- and bottom-mounted TCAS antennas are found on the Boeing 727-200, 737-300 and 747-200 airframes. It will be shown that the overall bearing errors of the amplitude system are consistently smaller than those of the spiral phase TCAS. The effect of two types of nearby scatterers--antennas and engine inlets--is studied. The AT741 L-band blade and the DMC60-1 VHF communication antenna were chosen as representative examples of interfering antennas. Models are derived for the blades via a moment method analysis followed by a least squares procedure to synthesize the scattering patterns. Studies were conducted to estimate the minimum separation between the two antennas for acceptable operation. It will be shown that the spiral phase TCAS is adversely affected by the presence of a blade antenna. The amplitude system does not suffer from this limitation, especially for the forward look angles which are of most interest here. A model to represent the inlet scattering is based on the multiple scattering method and UTD. The engine on top of the B727-200 fuselage is modeled by a terminated circular waveguide. Then, the effect of moving the antenna forward on the fuselage is studied. It is again shown that the performance of the amplitude system is superior.

  6. Performance of Traffic-Alert Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Sampath, Krishna Sampath

    The performance of two TCAS systems is studied in the presence of electromagnetic scatterers. TCAS is an aircraft-mounted angle of arrival (AOA) system, which estimates the bearing of a signal transmitted from a mode-S transponder on another nearby aircraft (intruder). Two systems are studied: (i) the Comparison of Relative Amplitude (CRA) system and (ii) the Spiral Phase Antenna (SPA). The CRA antenna receives the reply via four switched beams. The bearing is estimated by comparing the amplitudes of the received signal. The SPA is based on the phase interferometer, which utilizes the received phase via sum and difference beams. The AOA is computed by comparing the reply with similar values on a calibration table, which is generated by modeling the TCAS antenna on the bare fuselage of a Boeing 727-200. The antenna patterns for the TCAS are found via high frequency methods based on the Uniform Geometric Theory of Diffraction (UTD). By minimizing the standard deviation of the bearing error in a specified angular sector, optimal locations for top- and bottom-mounted TCAS antennas are found on the Boeing 727-200, 737-300 and 747-200 airframes. It will be shown that the overall bearing errors of the amplitude system are consistently smaller than those of the spiral phase TCAS. The effect of two types of nearby scatterers--antennas and engine inlets--is studied. The AT741 L-band blade and the DMC60-1 VHF communication antenna were chosen as representative examples of interfering antennas. Models are derived for the blades via a moment method analysis followed by a least squares procedure to synthesize the scattering patterns. Studies were conducted to estimate the minimum separation between the two antennas for acceptable operation. It will be shown that the spiral phase TCAS is adversely affected by the presence of a blade antenna. The amplitude system does not suffer from this limitation, especially for the forward look angles which are of most interest here. A model to represent the inlet scattering is based on the multiple scattering method and UTD. The engine on top of the B727-200 fuselage is modeled by a terminated circular waveguide. Then, the effect of moving the antenna forward on the fuselage is studied. It is again shown that the performance of the amplitude system is superior.

  7. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. Then the background intensity error and the residual error can be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms affected by environmental disturbance, low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
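
    The Gram-Schmidt step for two fringe patterns can be sketched compactly: after removing the DC background, the second pattern is orthogonalized against the first, yielding an approximate quadrature pair from which the phase follows by arctangent. The sketch below shows only this step; the paper's ellipse-fitting refinement is not included, and the synthetic data are illustrative.

```python
import numpy as np

def gs_phase(I1, I2):
    """Wrapped phase map from two fringe patterns via Gram-Schmidt."""
    u1 = I1 - I1.mean()                 # suppress the DC background
    u2 = I2 - I2.mean()
    u1 = u1 / np.linalg.norm(u1)        # normalize the reference frame
    u2 = u2 - np.sum(u1 * u2) * u1      # orthogonalize against the reference
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(u2, u1)           # quadrature pair -> wrapped phase

# Toy usage with synthetic fringes and an unknown ~1 rad phase shift:
x = np.linspace(0, 4 * np.pi, 256)
phi = np.outer(np.ones(256), x)         # true phase ramp
I1 = 1 + np.cos(phi)
I2 = 1 + np.cos(phi + 1.0)
phase = gs_phase(I1, I2)
```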

  8. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviation of the 9 outgoing beam energies in the optimized gratings was 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268

  9. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.

  10. Alternating phase-shifting masks: phase determination and impact of quartz defects--theoretical and experimental results

    NASA Astrophysics Data System (ADS)

    Griesinger, Uwe A.; Dettmann, Wolfgang; Hennig, Mario; Heumann, Jan P.; Koehle, Roderick; Ludwig, Ralf; Verbeek, Martin; Zarrabian, Mardjan

    2002-07-01

    In optical lithography, balancing the aerial image of an alternating phase-shifting mask (alt. PSM) is a major challenge. For the current exposure wavelengths (248 nm and 193 nm) an optimum etching method is necessary to overcome imbalance effects. Defects play an important role in the imbalances of the aerial image. In this contribution, the methodology of global phase-imbalance control is also applied to local imbalances that result from quartz defects. The effective phase error can be determined with an AIMS system by measuring the CD width between the images of deep and shallow trenches at different focus settings. The AIMS results are analyzed in comparison with the simulated and lithographic print results of the alternating structures. For the analysis of local aerial-image imbalances, it is necessary to investigate the capability of detecting these phase defects with state-of-the-art inspection systems. Alternating PSMs containing programmed defects were inspected with different algorithms to investigate the capture rate of special phase defects as a function of defect size. Besides inspection, repair of phase defects is also an important task. In this contribution we show the effect of repair on the optical behavior of phase defects. Due to the limited accuracy of the repair tools, the repaired area still shows a certain local phase error. This error can be caused either by residual quartz material or by substrate damage. The influence of such repair-induced phase errors on the aerial image was investigated.

  11. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    NASA Astrophysics Data System (ADS)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.00038 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications not possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.

  12. AQMEII3 evaluation of regional NA/EU simulations and ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and North American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac

  13. EEG oscillatory patterns are associated with error prediction during music performance and are altered in musician's dystonia.

    PubMed

    Ruiz, María Herrojo; Strübing, Felix; Jabusch, Hans-Christian; Altenmüller, Eckart

    2011-04-15

    Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance and modify performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training characterizing expert performance. Recent EEG studies on piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypresses). This ERP component, termed pre-error-related negativity (pre-ERN), was assumed to reflect processes of error detection in advance. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself, or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including the ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia-thalamic-frontal circuits? Consequently, we investigated the EEG oscillatory and phase-synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta- and beta-band oscillations over the pFMC and 13-15 Hz phase synchronization between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6-8 Hz) and correlated with the severity of the disorder. The present findings shed new light on the neural mechanisms that might implement motor prediction by means of forward control processes, as they function in healthy pianists and in their altered form in patients with MD. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Resolving Mixed Algal Species in Hyperspectral Images

    PubMed Central

    Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.

    2014-01-01

    We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations, to characterize the system's performance. The spectral responses to volumetric changes in single algal cultures and in combinations of algal mixtures with known ratios were tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root-mean-square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, the Beer-Lambert law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for the optical property measurements. PMID:24451451
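
    The unmixing step can be sketched as a nonnegative least-squares problem with a sum-to-one constraint; the endmember spectra and noise level below are synthetic assumptions, not the measured algal spectra:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Constrained (nonnegative, sum-to-one) linear unmixing of a two-component
    # mixture with synthetic endmember spectra.
    rng = np.random.default_rng(1)
    wl = np.linspace(400, 900, 120)                   # wavelengths (nm)
    endmembers = np.column_stack([
        np.exp(-((wl - 680) / 40) ** 2),              # "alga A" spectrum
        np.exp(-((wl - 550) / 60) ** 2),              # "alga B" spectrum
    ])

    true_abund = np.array([0.7, 0.3])
    mixed = endmembers @ true_abund + rng.normal(0, 0.005, wl.size)

    # Enforce sum-to-one with a heavily weighted constraint row; nnls
    # already enforces nonnegativity.
    w = 100.0
    A = np.vstack([endmembers, w * np.ones((1, 2))])
    b = np.concatenate([mixed, [w]])
    abund, _ = nnls(A, b)

    print("estimated abundances:", np.round(abund, 3))
    print("percent prediction error:", np.round(100 * np.abs(abund - true_abund), 2))
    ```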

  15. Multimodal pressure-flow method to assess dynamics of cerebral autoregulation in stroke and hypertension

    PubMed Central

    Novak, Vera; Yang, Albert CC; Lepicovsky, Lukas; Goldberger, Ary L; Lipsitz, Lewis A; Peng, Chung-Kang

    2004-01-01

    Background This study evaluated the effects of stroke on regulation of cerebral blood flow in response to fluctuations in systemic blood pressure (BP). The autoregulatory dynamics are difficult to assess because of the nonstationarity and nonlinearity of the component signals. Methods We studied 15 normotensive, 20 hypertensive and 15 minor stroke subjects (48.0 ± 1.3 years). BP and blood flow velocities (BFV) from middle cerebral arteries (MCA) were measured during the Valsalva maneuver (VM) using transcranial Doppler ultrasound. Results A new technique, multimodal pressure-flow analysis (MMPF), was implemented to analyze these short, nonstationary signals. MMPF analysis decomposes complex BP and BFV signals into multiple empirical modes, representing their instantaneous frequency-amplitude modulation. The empirical mode corresponding to the VM BP profile was used to construct the continuous phase diagram and to identify the minimum and maximum values from the residual BP (BPR) and BFV (BFVR) signals. The BP-BFV phase shift was calculated as the difference between the phase corresponding to the BPR and BFVR minimum (maximum) values. BP-BFV phase shifts were significantly different between groups. In the normotensive group, the BFVR minimum and maximum preceded the BPR minimum and maximum, respectively, leading to large positive values of BP-BFV shifts. Conclusion In the stroke and hypertensive groups, the resulting BP-BFV phase shift was significantly smaller compared to the normotensive group. A standard autoregulation index did not differentiate the groups. The MMPF method enables evaluation of autoregulatory dynamics based on instantaneous BP-BFV phase analysis. Regulation of BP-BFV dynamics is altered with hypertension and after stroke, rendering blood flow dependent on blood pressure. PMID:15504235

  16. Purification of Logic-Qubit Entanglement

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    Recently, logic-qubit entanglement has shown potential applications in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement is equivalent to the bit-flip error in logic-qubit entanglement, and can also be purified. This entanglement purification protocol may find applications in future quantum communication and quantum networks. PMID:27377165

  17. Rapid Measurement and Correction of Phase Errors from B0 Eddy Currents: Impact on Image Quality for Non-Cartesian Imaging

    PubMed Central

    Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.

    2014-01-01

    Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short-time B0 eddy currents in manufacturer-provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532

  18. Fringe-period selection for a multifrequency fringe-projection phase unwrapping method

    NASA Astrophysics Data System (ADS)

    Zhang, Chunwei; Zhao, Hong; Jiang, Kejian

    2016-08-01

    The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of correctly accomplishing phase unwrapping even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered. FOE results in phase unwrapping error. Ideally, phase unwrapping should remain correct while the fewest sets of lower-frequency fringe patterns are used. To achieve this goal, in this paper a parameter called fringe order inaccuracy (FOI) is defined, the dominant factors which may induce FOE are theoretically analyzed, a method to optimally select the fringe periods for the MFPPUM is proposed with the aid of FOI, and experiments are conducted to investigate the impact of the dominant factors on phase unwrapping and demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and detect phase unwrapping error for the MFPPUM.
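
    The fringe-order step at the heart of the MFPPUM can be sketched as follows: the unwrapped low-frequency phase, scaled by the frequency ratio, predicts the high-frequency phase, and rounding the discrepancy gives the fringe order. The margin before rounding plays a role in the spirit of the paper's FOI; the ratio and noise levels below are assumptions:

    ```python
    import numpy as np

    # Fringe-order computation in multi-frequency temporal unwrapping. The
    # low-frequency phase is assumed directly measurable without wrapping
    # (fewer than one full fringe across the range).
    rng = np.random.default_rng(0)
    r = 8.0                                     # fringe frequency ratio (high/low)
    true_h = np.linspace(0, 12 * np.pi, 1000)   # unwrapped high-frequency truth
    phi_l = true_h / r + rng.normal(0, 0.02, 1000)          # low-frequency phase
    phi_h = np.angle(np.exp(1j * (true_h + rng.normal(0, 0.02, 1000))))  # wrapped

    # Margin before rounding: values near half-integers risk a fringe order
    # error (FOE), which grows with the ratio r and the phase noise.
    foi = (r * phi_l - phi_h) / (2 * np.pi)
    k = np.round(foi)
    unwrapped = phi_h + 2 * np.pi * k

    print("max |FOI - k|:", np.max(np.abs(foi - k)))
    print("fringe order errors:",
          np.count_nonzero(np.abs(unwrapped - true_h) > np.pi))
    ```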

  19. VizieR Online Data Catalog: V and R CCD photometry of visual binaries (Abad+, 2004)

    NASA Astrophysics Data System (ADS)

    Abad, C.; Docobo, J. A.; Lanchares, V.; Lahulla, J. F.; Abelleira, P.; Blanco, J.; Alvarez, C.

    2003-11-01

    Table 1 gives relevant data for the visual binaries observed. Observations were carried out over a short period of time; therefore we assign the mean epoch (1998.58) to the totality of the data. Data for individual stars are presented as average values with errors, by parameter, when several observations have been obtained, as well as the number of observations involved. Errors corresponding to astrometric relative positions between components are always present. For single observations, parameter fitting errors, especially for the dx and dy parameters, have been calculated by analysing the chi-squared statistic around the minimum. Following the rules of error propagation, theta and rho errors can be estimated. Table 1 therefore shows single-observation errors with an additional significant digit. When a star does not have known references, we include it in Table 2, where the J2000 position and magnitudes are from the USNO-A2.0 catalogue (Monet et al., 1998). (2 data files).

  20. Compact Low-Loss Planar Magic-T

    NASA Technical Reports Server (NTRS)

    U-yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Moseley, Samuel H.

    2008-01-01

    This design allows broadband power combining with high isolation between the H port and E port, and achieves a lower insertion loss than any other broadband planar magic-T. Passive microwave/millimeter-wave signal power is combined both in-phase and out-of-phase at the ports, with the phase error being less than 1°, which is limited by port impedance. The in-phase signal combiner consists of two quarter-wavelength-long transmission lines combined at the microstrip line junction. The out-of-phase signal combiner consists of two half-wavelength-long transmission lines combined in series. Structural symmetry creates a virtual ground plane at the combining junction, and the combined signal is converted from microstrip line to slotline. Optimum realizable characteristic impedances are used so that the magic-T provides broadband response with low return loss. The magic-T is used at microwave and millimeter-wave frequencies, with the operating bandwidth being approximately 100 percent. The minimum isolation obtainable is 32 dB from port E to port H. The magic-T VSWR is less than 1.1 in the operating band. Operating temperature is mainly dependent on the variation in the dielectric constant of the substrate. Using a crystallized substrate, the invention can operate over an extremely broad range of temperatures (from 0 to 400 K). It has very high reliability because it has no moving parts and requires no maintenance, though it is desirable that the magic-T operate in a low-humidity environment. Fabrication of this design is very simple, using only two metallized layers. No bond wires, via holes, or air bridges are required. Additionally, this magic-T can operate as an individual component without auxiliary components.

  1. The efficacy of three objective systems for identifying beef cuts that can be guaranteed tender.

    PubMed

    Wheeler, T L; Vote, D; Leheska, J M; Shackelford, S D; Belk, K E; Wulf, D M; Gwartney, B L; Koohmaraie, M

    2002-12-01

    The objective of this study was to determine the accuracy of three objective systems (prototype BeefCam, colorimeter, and slice shear force) for identifying guaranteed tender beef. In Phase I, 308 carcasses (105 Top Choice, 101 Low Choice, and 102 Select) from two commercial plants were tested. In Phase II, 400 carcasses (200 rolled USDA Select and 200 rolled USDA Choice) from one commercial plant were tested. The three systems were evaluated based on progressive certification of the longissimus as "tender" in 10% increments (the best 10, 20, 30%, etc., certified as "tender" by each technology; 100% certification would mean no sorting for tenderness). In Phase I, the error (percentage of carcasses certified as tender that had Warner-Bratzler shear force of ≥ 5 kg at 14 d postmortem) for 100% certification using all carcasses was 14.1%. All certification levels up to 80% (slice shear force) and up to 70% (colorimeter) had less error (P < 0.05) than 100% certification. Errors in all levels of certification by prototype BeefCam (13.8 to 9.7%) were not different (P > 0.05) from 100% certification. In Phase I, the error for 100% certification for USDA Select carcasses was 30.7%. For Select carcasses, all slice shear force certification levels up to 60% (0 to 14.8%) had less error (P < 0.05) than 100% certification. For Select carcasses, errors in all levels of certification by colorimeter (20.0 to 29.6%) and by BeefCam (27.5 to 31.4%) were not different (P > 0.05) from 100% certification. In Phase II, the error for 100% certification for all carcasses was 9.3%. For all levels of slice shear force certification less than 90% (for all carcasses) or less than 80% (Select carcasses), errors in tenderness certification were less than (P < 0.05) for 100% certification. In Phase II, for all carcasses or Select carcasses, colorimeter and prototype BeefCam certifications did not significantly reduce errors (P > 0.05) compared to 100% certification. Thus, the direct measure of tenderness provided by slice shear force results in more accurate identification of "tender" beef carcasses than either of the indirect technologies, prototype BeefCam, or colorimeter, particularly for USDA Select carcasses. As tested in this study, slice shear force, but not the prototype BeefCam or colorimeter systems, accurately identified "tender" beef.

  2. Practicality of quantum information processing

    NASA Astrophysics Data System (ADS)

    Lau, Hoi-Kwan

    Quantum Information Processing (QIP) is expected to bring revolutionary enhancement to various technological areas. However, today's QIP applications are far from practical. The problem involves both hardware issues, i.e., quantum devices are imperfect, and software issues, i.e., the functionality of some QIP applications is not fully understood. Aiming to improve the practicality of QIP, in my PhD research I have studied various topics in quantum cryptography and ion trap quantum computation. In quantum cryptography, I first studied the security of position-based quantum cryptography (PBQC). I discovered a wrong assumption in the previous literature that the cheaters are not allowed to share entangled resources. I proposed entanglement attacks that could cheat all known PBQC protocols. I also studied the practicality of continuous-variable (CV) quantum secret sharing (QSS). While the security of CV QSS was considered in the literature only in the limit of infinite squeezing, I found that finitely squeezed CV resources could also provide a finite secret sharing rate. Our work relaxes the stringent resource requirements of implementing QSS. In ion trap quantum computation, I studied the phase error of quantum information induced by the dc Stark effect during ion transportation. I found an optimized ion trajectory for which the phase error is minimized. I also defined a threshold speed, above which ion transportation would induce significant error. In addition, I proposed a new application for ion trap systems as universal bosonic simulators (UBS). I introduced two architectures, and discussed their respective strengths and weaknesses. I illustrated the implementations of bosonic state initialization, transformation, and measurement by applying radiation fields or by varying the trap potential. When compared with conducting optical experiments, the ion trap UBS is advantageous in higher state-initialization efficiency and higher measurement accuracy. Finally, I proposed a new method to re-cool ion qubits during quantum computation. The idea is to transfer the motional excitation of a qubit to another ion that is prepared in the motional ground state. I showed that my method could be ten times faster than current laser cooling techniques, and thus could improve the speed of ion trap quantum computation.

  3. Deformation Estimation In Non-Urban Areas Exploiting High Resolution SAR Data

    NASA Astrophysics Data System (ADS)

    Goel, Kanika; Adam, Nico

    2012-01-01

    Advanced techniques such as the Small Baseline Subset Algorithm (SBAS) have been developed for terrain motion mapping in non-urban areas, with a focus on extracting information from distributed scatterers (DSs). SBAS uses small-baseline differential interferograms (to limit the effects of geometric decorrelation), and these are typically multilooked to reduce phase noise, resulting in a loss of resolution. Various error sources, e.g. phase unwrapping errors, topographic errors, temporal decorrelation and atmospheric effects, also affect the interferometric phase. The aim of our work is improved deformation monitoring in non-urban areas exploiting high-resolution SAR data. The paper provides technical details and a processing example of a newly developed technique which incorporates an adaptive spatial phase filtering algorithm for accurate high-resolution differential interferometric stacking, followed by deformation retrieval via the SBAS approach, where we perform the phase inversion using a more robust L1-norm minimization.
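
    The robustness of the L1 norm to occasional gross phase errors can be illustrated with the standard linear-programming reformulation of least-absolute-deviations; the tiny design matrix below is an idealized stand-in for an SBAS system, not the paper's:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_solve(A, b):
        """Minimize ||A x - b||_1 via the LP with variables [x, t]:
        min sum(t) subject to -t <= A x - b <= t."""
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
        b_ub = np.concatenate([b, -b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n + [(0, None)] * m)
        return res.x[:n]

    A = np.tril(np.ones((12, 4)))          # interferogram i spans the first epochs
    x_true = np.array([1.0, -0.5, 2.0, 0.3])
    b = A @ x_true
    b[5] += 5.0                            # one gross error (e.g. bad unwrapping)

    print("L1 estimate:", np.round(l1_solve(A, b), 3))   # rejects the outlier
    print("L2 estimate:", np.round(np.linalg.lstsq(A, b, rcond=None)[0], 3))
    ```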

  4. Postfabrication Phase Error Correction of Silicon Photonic Circuits by Single Femtosecond Laser Pulses

    DOE PAGES

    Bachman, Daniel; Chen, Zhijiang; Wang, Christopher; ...

    2016-11-29

    Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, post-fabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam-shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.

  5. Changes in the relationship NAO-Northern Hemisphere Temperature due to solar activity

    NASA Astrophysics Data System (ADS)

    de La Torre, L.; Gimeno, L.; Añel, J. A.; Nieto, R.; Tesouro, M.; Ribera, P.; García, R.; Hernández, E.

    2003-04-01

    The influence of the North Atlantic Oscillation (NAO) on wintertime Northern Hemisphere Temperature (NHT) is investigated. To check the hypothesis that the solar cycle is modulating this relationship, the sample was divided into two groups: one included the years corresponding to the three consecutive lowest values of sunspot number for every 11-year cycle (43 years), and the other the years corresponding to the three consecutive highest numbers (39 years) for every 11-year cycle. If the data of each year were independent, the correlation coefficients between the NAO index and NHT for 43 (39) years would be 0.30 (0.32) at the 95% confidence level. The correlation coefficient corresponding to the solar minimum phases was -0.17, and that for the solar maximum phases was 0.35. The second result is statistically significant and indicates that there are periods when a positive phase of the NAO is related to positive anomalies of NHT (a result that supports the current idea of the influence of the NAO on temperature), but there are other periods when NAO and NHT are not correlated. The results thus suggest that this relationship has a different sign according to the phase of the solar cycle. For solar maximum phases, NAO and NHT are positively correlated (the result assumed up to now), but for solar minimum phases the correlations are not significant or even negative. This result is in agreement with the different extension of the NAO for the solar cycle phases [1]: almost hemispheric for maximum phases and confined to the eastern Atlantic for minimum phases.

  6. Encrypted holographic data storage based on orthogonal-phase-code multiplexing.

    PubMed

    Heanue, J F; Bashaw, M C; Hesselink, L

    1995-09-10

    We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.

  7. Signal location using generalized linear constraints

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.; Feldman, D. D.

    1992-01-01

    This report presents a two-part method for estimating the directions of arrival (DOAs) of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.

  8. Performance analysis of adaptive equalization for coherent acoustic communications in the time-varying ocean environment.

    PubMed

    Preisig, James C

    2005-07-01

    Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow-water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.

  9. Analysis of Choice Stepping with Visual Interference Can Detect Prolonged Postural Preparation in Older Adults with Mild Cognitive Impairment at High Risk of Falling.

    PubMed

    Uemura, Kazuki; Hasegawa, Takashi; Tougou, Hiroki; Shuhei, Takahashi; Uchiyama, Yasushi

    2015-01-01

    We aimed to clarify postural control deficits in older adults with mild cognitive impairment (MCI) at high risk of falling by addressing the inhibitory process. This study involved 376 community-dwelling older adults with MCI. Participants were instructed to execute forward stepping on the side indicated by the central arrow while ignoring the 2 flanking arrows on each side (→→→→→, congruent, or →→←→→, incongruent). Initial weight transfer direction errors [anticipatory postural adjustment (APA) errors], step execution times, and divided phases (reaction, APA, and swing phases) were measured from vertical force data. Participants were categorized as fallers (n = 37) and non-fallers (n = 339) based on fall experiences in the last 12 months. There were no differences in the step execution times, swing phases, step error rates, and APA error rates between groups, but fallers had a significantly longer APA phase relative to non-fallers in trials of the incongruent condition with APA errors (p = 0.005). Fallers also had a longer reaction phase in trials with the correct APA, regardless of the condition (p = 0.01). Analyses of choice stepping with visual interference can detect prolonged postural preparation as a specific falling-associated deficit in older adults with MCI. © 2015 S. Karger AG, Basel.

  10. Impact of SST Anomaly Events over the Kuroshio-Oyashio Extension on the "Summer Prediction Barrier"

    NASA Astrophysics Data System (ADS)

    Wu, Yujie; Duan, Wansuo

    2018-04-01

    The "summer prediction barrier" (SPB) of SST anomalies (SSTA) over the Kuroshio-Oyashio Extension (KOE) refers to the phenomenon that prediction errors of KOE-SSTA tend to increase rapidly during boreal summer, resulting in large prediction uncertainties. The fast error growth associated with the SPB occurs in the mature-to-decaying transition phase, which is usually during the August-September-October (ASO) season, of the KOE-SSTA events to be predicted. Thus, the role of KOE-SSTA evolutionary characteristics in the transition phase in inducing the SPB is explored by performing perfect model predictability experiments in a coupled model, indicating that the SSTA events with larger mature-to-decaying transition rates (Category-1) favor a greater possibility of yielding a more significant SPB than those events with smaller transition rates (Category-2). The KOE-SSTA events in Category-1 tend to have more significant anomalous Ekman pumping in their transition phase, resulting in larger prediction errors of vertical oceanic temperature advection associated with the SSTA events. Consequently, Category-1 events possess faster error growth and larger prediction errors. In addition, the anomalous Ekman upwelling (downwelling) in the ASO season also causes SSTA cooling (warming), accelerating the transition rates of warm (cold) KOE-SSTA events. Therefore, the SSTA transition rate and error growth rate are both related with the anomalous Ekman pumping of the SSTA events to be predicted in their transition phase. This may explain why the SSTA events transferring more rapidly from the mature to decaying phase tend to have a greater possibility of yielding a more significant SPB.

  11. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
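
    The core of KK phase retrieval can be sketched in a few lines: for a minimum-phase response, the spectral phase is the Hilbert transform of the log amplitude (sign conventions vary). The single Lorentzian line on a constant nonresonant background below is a toy model, not the paper's data or its full error-correction procedure:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Toy Kramers-Kronig phase retrieval: one Lorentzian Raman line on a
    # constant nonresonant background (an assumed model).
    w = np.linspace(-50, 50, 4001)              # detuning axis (arb. units)
    chi = 1.0 + 0.3 / (10.0 - w - 1j * 2.0)     # NRB + one resonance
    I_cars = np.abs(chi) ** 2                   # measured CARS-like intensity

    # scipy's hilbert() returns the analytic signal x + i*H(x); applying it
    # to 0.5*log(I) yields the retrieved phase in its imaginary part.
    phase_kk = np.imag(hilbert(0.5 * np.log(I_cars)))
    phase_true = np.angle(chi)

    interior = slice(400, -400)                 # exclude FFT edge effects
    print("max interior |phase error| [rad]:",
          np.max(np.abs(phase_kk - phase_true)[interior]))
    ```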

  12. Recognizing and Reducing Analytical Errors and Sources of Variation in Clinical Pathology Data in Safety Assessment Studies.

    PubMed

    Schultze, A E; Irizarry, A R

    2017-02-01

    Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.

  13. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    PubMed

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
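
    A minimal sketch of the procedure, with three common two-sample statistics standing in for the pre-specified set (the data and the particular statistics are illustrative assumptions):

    ```python
    import numpy as np
    from scipy import stats

    # Minimum-p-value permutation test: the smallest of several p-values is
    # the test statistic, calibrated against permutations of group labels.
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 40)          # control arm (toy data)
    y = rng.normal(0.5, 1.0, 40)          # treatment arm (toy data)

    def min_pvalue(x, y):
        p_t = stats.ttest_ind(x, y).pvalue        # difference in means
        p_w = stats.mannwhitneyu(x, y).pvalue     # rank-based shift
        p_v = stats.levene(x, y).pvalue           # difference in spread
        return min(p_t, p_w, p_v)

    observed = min_pvalue(x, y)
    pooled = np.concatenate([x, y])
    perm = []
    for _ in range(2000):
        rng.shuffle(pooled)
        perm.append(min_pvalue(pooled[:40], pooled[40:]))

    # Fraction of permutations with a min-p at least as extreme: this keeps
    # the type I error at its nominal level despite using three statistics.
    print("permutation p-value:", np.mean(np.array(perm) <= observed))
    ```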

  14. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    PubMed Central

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886

  15. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
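
    The forecasting-and-scoring loop described in these two records can be sketched as follows; the synthetic monthly series, the ARIMA orders, and the 12-month holdout are assumptions, not the study's data or fitted model:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Fit an ARIMA model to a synthetic monthly revenue series and score the
    # forecast by mean absolute percentage error (MAPE).
    rng = np.random.default_rng(0)
    t = np.arange(57)                               # Jan 2005 - Sep 2009
    revenue = (1000 + 5 * t + 80 * np.sin(2 * np.pi * t / 12)
               + rng.normal(0, 20, t.size))
    series = pd.Series(revenue,
                       index=pd.date_range("2005-01", periods=57, freq="MS"))

    train, test = series[:-12], series[-12:]
    fit = ARIMA(train, order=(1, 1, 1),
                seasonal_order=(1, 0, 0, 12)).fit()
    forecast = fit.forecast(steps=12)

    mape = 100 * np.mean(np.abs((test - forecast) / test))
    print(f"MAPE over the 12-month holdout: {mape:.1f}%")
    ```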

  16. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. Analog part uses linear encoding to transmit the quantization error which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  17. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
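
    The decimation idea can be sketched over GF(2): raise the companion matrix to the (n*k)th power, and each row of the result is one XOR equation of the parallel feedback network. The small primitive polynomial below is an illustrative stand-in for a practical generator:

    ```python
    import numpy as np

    # Companion matrix of x^4 + x + 1 (recurrence s_t = s_{t-3} + s_{t-4}),
    # a small primitive polynomial used here only for illustration.
    C = np.array([[0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=np.uint8)

    def mat_pow_gf2(M, e):
        """Square-and-multiply matrix power with all arithmetic mod 2."""
        R = np.eye(M.shape[0], dtype=np.uint8)
        while e:
            if e & 1:
                R = (R @ M) % 2
            M = (M @ M) % 2
            e >>= 1
        return R

    k, n = 4, 1                            # 4 parallel generators, 1 bit each
    D = mat_pow_gf2(C, n * k)              # decimation matrix -> XOR network
    print("decimation matrix D = C^4 (each row is one XOR equation):\n", D)

    state = np.array([1, 0, 0, 0], dtype=np.uint8)
    for step in range(3):                  # each generator jumps 4 states/step
        state = (D @ state) % 2
        print("state after", 4 * (step + 1), "shifts:", state)
    ```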

  18. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

    A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal comprises a voltage-controlled oscillator (VCO), which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO, locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to the frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.

  19. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and, based on the evolved criteria, an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent-ensemble-based wind speed forecast is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done employing 102 criteria; the evolved criteria are verified by the computed error values. The proposed criteria for fixing the number of hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature.

  20. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and, based on the evolved criteria, an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent-ensemble-based wind speed forecast is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done employing 102 criteria; the evolved criteria are verified by the computed error values. The proposed criteria for fixing the number of hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973
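
    A minimal sketch of the ensemble-averaging idea, with MLPs of different hidden-layer sizes standing in for the paper's MLP/Madaline/BPN/PNN mix (the data, sizes, and library choice are assumptions):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Train several networks and average their forecasts; the synthetic data
    # stand in for lagged wind-speed features.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (300, 3))
    y = 5 + 3 * X[:, 0] - 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.2, 300)
    X_train, X_test, y_train, y_test = X[:250], X[250:], y[:250], y[250:]

    sizes = (4, 8, 16, 32)                  # hidden-neuron counts (assumed)
    members = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000,
                            random_state=h).fit(X_train, y_train)
               for h in sizes]

    preds = np.array([m.predict(X_test) for m in members])
    ensemble = preds.mean(axis=0)           # the ensemble forecast

    for h, p in zip(sizes, preds):
        print(f"{h:>2} hidden neurons: RMSE = "
              f"{np.sqrt(np.mean((p - y_test) ** 2)):.3f}")
    print(f"ensemble average : RMSE = "
          f"{np.sqrt(np.mean((ensemble - y_test) ** 2)):.3f}")
    ```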

  1. Image reduction pipeline for the detection of variable sources in highly crowded fields

    NASA Astrophysics Data System (ADS)

    Gössl, C. A.; Riffeser, A.

    2002-01-01

    We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction, Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images and finally apply a PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
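
    Per-pixel error propagation through bias subtraction and flatfielding follows the usual first-order variance rules; the gain, read noise, and frames below are assumptions, not WeCAPP values:

    ```python
    import numpy as np

    # Variance propagation through two standard CCD reduction steps.
    rng = np.random.default_rng(0)
    gain, read_noise = 2.0, 5.0                      # e-/ADU, e- RMS (assumed)

    raw = rng.poisson(2000, (64, 64)).astype(float)  # raw science frame [ADU]
    bias = np.full((64, 64), 300.0)                  # master bias [ADU]
    flat = rng.normal(1.0, 0.01, (64, 64))           # normalized master flat

    # Bias-subtracted frame: shot noise + read noise, expressed in ADU^2.
    sci = raw - bias
    var_sci = np.clip(sci, 0, None) / gain + (read_noise / gain) ** 2

    # Flatfield division: for f = s / F, first-order propagation gives
    # var(f) = var(s)/F^2 + s^2 * var(F)/F^4.
    var_flat = 0.002 ** 2                            # flat uncertainty (assumed)
    final = sci / flat
    var_final = var_sci / flat**2 + sci**2 * var_flat / flat**4

    print("median S/N per pixel:", np.median(final / np.sqrt(var_final)))
    ```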

  2. Cost effectiveness of the US Geological Survey's stream-gaging program in New York

    USGS Publications Warehouse

    Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.

    1986-01-01

    The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost-effective means of obtaining streamflow data. This report describes the stream-gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)

  3. Design of an all-attitude flight control system to execute commanded bank angles and angles of attack

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Eggleston, D. M.

    1976-01-01

    A flight control system for use in air-to-air combat simulation was designed. The inputs to the flight control system are commanded bank angle and angle of attack; the outputs are commands to the control-surface actuators such that the commanded values are achieved in near-minimum time while sideslip remains small. For the longitudinal direction, a conventional linear control system with gains scheduled as a function of dynamic pressure is employed. For the lateral direction, a novel control system is employed, consisting of a linear portion for small bank-angle errors and a bang-bang control system for large errors and error rates.

  4. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the radar wind field. To reduce the residual error of the grey-algorithm wind field prediction, the minimum of the residual error function is calculated, a BP neural network is trained on the residuals of the grey algorithm, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm modified by the BP neural network effectively reduces the residual value and improves the prediction precision.
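
    A sketch of the hybrid scheme under stated assumptions: a GM(1,1) grey model forecasts a synthetic series, and a small autoregression (standing in for the BP residual network) forecasts and corrects the grey model's residual sequence:

    ```python
    import numpy as np

    def gm11_fit(x0):
        """Least-squares GM(1,1) parameters (a, b) from x0[k] = -a*z[k] + b."""
        x1 = np.cumsum(x0)
        z = 0.5 * (x1[1:] + x1[:-1])               # background values
        B = np.column_stack([-z, np.ones_like(z)])
        return np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def gm11_predict(x0, a, b, n):
        k = np.arange(n)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat, prepend=0.0)        # back to the original series

    wind = 6 + 0.05 * np.arange(30) + 0.5 * np.sin(np.arange(30.0))  # synthetic
    train, test = wind[:24], wind[24:]

    a, b = gm11_fit(train)
    grey = gm11_predict(train, a, b, 30)
    resid = train - grey[:24]

    # Fit an AR(4) to the residuals (BP-network stand-in), forecast recursively.
    p = 4
    X = np.column_stack([resid[p - i - 1: 24 - i - 1] for i in range(p)])
    w = np.linalg.lstsq(X, resid[p:], rcond=None)[0]
    hist = list(resid)
    for _ in range(6):
        hist.append(float(np.dot(w, hist[-1:-p - 1:-1])))

    hybrid = grey[24:] + np.array(hist[24:])
    print("grey-only   test RMSE:", np.sqrt(np.mean((grey[24:] - test) ** 2)))
    print("grey+resid. test RMSE:", np.sqrt(np.mean((hybrid - test) ** 2)))
    ```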

  5. Sensitivity, optimal scaling and minimum roundoff errors in flexible structure models

    NASA Technical Reports Server (NTRS)

    Skelton, Robert E.

    1987-01-01

    Traditional modeling notions presume the existence of a truth model that relates the input to the output, without advanced knowledge of the input. This has led to the evolution of education and research approaches (including the available control and robustness theories) that treat the modeling and control design as separate problems. The paper explores the subtleties of this presumption that the modeling and control problems are separable. A detailed study of the nature of modeling errors is useful to gain insight into the limitations of traditional control and identification points of view. Modeling errors need not be small but simply appropriate for control design. Furthermore, the modeling and control design processes are inevitably iterative in nature.

  6. Quality Measures in Pre-Analytical Phase of Tissue Processing: Understanding Its Value in Histopathology.

    PubMed

    Rao, Shalinee; Masilamani, Suresh; Sundaram, Sandhya; Duvuru, Prathiba; Swaminathan, Rajendiran

    2016-01-01

    Quality monitoring in a histopathology unit is categorized into three phases, pre-analytical, analytical and post-analytical, to cover various steps in the entire test cycle. Review of the literature on quality evaluation studies pertaining to histopathology revealed that earlier reports mainly focused on analytical aspects, with limited studies assessing the pre-analytical phase. The pre-analytical phase encompasses several processing steps and handling of the specimen/sample by multiple individuals, thus allowing ample scope for errors. Due to its critical nature and the limited studies assessing quality in the pre-analytical phase, it deserves more attention. This study was undertaken to analyse and assess the quality parameters of the pre-analytical phase in a histopathology laboratory. This was a retrospective study of pre-analytical parameters in the histopathology laboratory of a tertiary care centre, on 18,626 tissue specimens received in 34 months. Registers and records were checked for efficiency and errors for the following pre-analytical quality variables: specimen identification, specimen in appropriate fixatives, lost specimens, daily internal quality control performance on staining, performance in an inter-laboratory quality assessment program [External Quality Assurance Program (EQAS)], and evaluation of internal non-conformities (NC) for other errors. The study revealed incorrect specimen labelling in 0.04%, 0.01% and 0.01% of specimens in 2007, 2008 and 2009 respectively. About 0.04%, 0.07% and 0.18% of specimens were not sent in fixatives in 2007, 2008 and 2009 respectively. There was no incidence of lost specimens. A total of 113 non-conformities were identified, of which 92.9% belonged to the pre-analytical phase. The predominant NC (any deviation from the normal standard which may generate an error and compromise quality standards) identified was wrong labelling of slides. Performance in EQAS for the pre-analytical phase was satisfactory in 6 of 9 cycles. The low incidence of errors in the pre-analytical phase implies that a satisfactory level of quality standards was being practised, with still scope for improvement.

  7. Combined Henyey-Greenstein and Rayleigh phase function.

    PubMed

    Liu, Quanhua; Weng, Fuzhong

    2006-10-01

    The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is essentially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to small-asymmetry scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfers. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weak asymmetry scattering are generally below 0.02 K when the HG-Rayleigh phase function is used. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
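
    As a hedged sketch of such a combined function, the code below uses a Cornette-Shanks-style closed form, in which an HG envelope modulates the Rayleigh shape; the paper's exact normalization may differ. The function name and the test values are assumptions. It reduces to the Rayleigh phase function as the asymmetry factor g goes to zero, and it is normalized so that its average over the sphere is unity.

        import numpy as np

        def hg_rayleigh(cos_theta, g):
            """Combined HG-Rayleigh phase function of the scattering-angle cosine."""
            return (1.5 * (1.0 - g**2) / (2.0 + g**2)
                    * (1.0 + cos_theta**2)
                    / (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

        def rayleigh(cos_theta):
            return 0.75 * (1.0 + cos_theta**2)

        mu = np.linspace(-1.0, 1.0, 5)
        print(np.allclose(hg_rayleigh(mu, 0.0), rayleigh(mu)))    # True: Rayleigh limit
        # Normalization check: (1/2) * integral over cos(theta) equals 1.
        mu_fine = np.linspace(-1.0, 1.0, 200001)
        print(np.trapz(hg_rayleigh(mu_fine, 0.3), mu_fine) / 2)   # ~1.0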

  8. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    PubMed Central

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. PMID:28522969
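
    As a loose illustration of the two-phase procedure, the sketch below runs one Equilibrium Propagation update on a tiny Hopfield-style network with symmetric weights and a hard-sigmoid activation. The fully connected layout, the constants BETA and ETA, the fixed-point iteration, and the absence of clamped inputs are all simplifying assumptions; the paper's layered model and MNIST experiments are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        n, out = 8, 2                          # total units; the last `out` are outputs
        W = rng.normal(scale=0.1, size=(n, n))
        W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)   # symmetric, no self-loops
        b = rng.normal(scale=0.5, size=n)               # unit biases

        rho = lambda s: np.clip(s, 0.0, 1.0)            # hard-sigmoid activation

        def relax(s, y=None, beta=0.0, steps=500, dt=0.1):
            """Descend the (possibly nudged) energy to an approximate fixed point."""
            for _ in range(steps):
                rp = ((s > 0.0) & (s < 1.0)).astype(float)   # rho'(s)
                ds = rp * (W @ rho(s) + b) - s               # -dE/ds
                if beta:                                     # weakly clamp outputs
                    ds[-out:] += beta * (y - s[-out:])
                s = s + dt * ds
            return s

        BETA, ETA = 0.5, 0.05
        y = np.array([1.0, 0.0])                             # toy target
        s_free = relax(rng.uniform(0.0, 1.0, n))             # phase 1: prediction
        s_nudge = relax(s_free.copy(), y=y, beta=BETA)       # phase 2: nudged

        # Contrastive update: difference of co-activations between the two phases.
        dW = (ETA / BETA) * (np.outer(rho(s_nudge), rho(s_nudge))
                             - np.outer(rho(s_free), rho(s_free)))
        np.fill_diagonal(dW, 0.0)
        W += dW                                              # dW is already symmetric

        # The free-phase prediction typically moves slightly toward the target.
        print(np.round(s_free[-out:], 3), "->", np.round(relax(s_free.copy())[-out:], 3))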

  9. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be markedly more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not a localized deep minimum of the huge fitness landscape.
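
    For reference, the sketch below shows the textbook fitness sharing mechanism mentioned above: each individual's raw fitness is divided by a niche count so that crowded regions of the search space are devalued, preserving diversity. The triangular sharing kernel and the parameter values are standard choices and assumptions here, not necessarily the paper's settings.

        import numpy as np

        def shared_fitness(fitness, distances, sigma_share=1.0, alpha=1.0):
            """fitness: (N,) raw values; distances: (N, N) pairwise distances
            between individuals (e.g., between hypothetical genetic codes)."""
            sh = np.where(distances < sigma_share,
                          1.0 - (distances / sigma_share) ** alpha, 0.0)
            niche_count = sh.sum(axis=1)     # self-distance 0 contributes sh = 1
            return fitness / niche_count

        # Toy example: three crowded individuals are devalued; the isolated
        # one keeps its full fitness.
        pts = np.array([0.0, 0.05, 0.1, 3.0])
        d = np.abs(pts[:, None] - pts[None, :])
        print(shared_fitness(np.ones(4), d))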

  10. Q-adjusting technique applied to vertical deflections estimation in a single-axis rotation INS/GPS integrated system

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao

    2016-10-01

    Previous studies have shown that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high-frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyzes the influence of the nominal process noise covariance matrix Q on the tracking error as well as the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV more closely. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q that yields a minimum tracking error and a comparatively short response delay; and for systems of different accuracy, different Q-adjusting strategies should be adopted. In this way, the accuracy of DOV estimation using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after applying the Q-adjusting technique is improved by approximately 23% and 33%, respectively, compared to that of the Earth Gravitational Model EGM2008 and the direct attitude difference method.
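
    The tradeoff driving the Q-adjusting idea can be illustrated with a toy scalar Kalman filter: a small process noise Q responds slowly and lags the signal, a large Q tracks quickly but passes measurement noise, and an intermediate Q minimizes the tracking error. The random-walk model, signal, and all numbers below are illustrative stand-ins, not the paper's INS/GPS system.

        import numpy as np

        rng = np.random.default_rng(8)
        t = np.arange(2000)
        truth = 0.5 * np.sin(2 * np.pi * t / 500)    # slowly varying "DOV" signal
        z = truth + rng.normal(0, 0.2, t.size)       # noisy observations
        R = 0.2 ** 2                                 # measurement noise variance

        for Q in (1e-6, 1e-3, 1e0):                  # sweep process noise settings
            x, P, est = 0.0, 1.0, []
            for zk in z:
                P += Q                               # predict (random-walk model)
                K = P / (P + R)                      # Kalman gain
                x += K * (zk - x); P *= (1 - K)      # measurement update
                est.append(x)
            err = np.sqrt(np.mean((np.array(est) - truth) ** 2))
            print(f"Q = {Q:.0e} -> RMS tracking error = {err:.3f}")
        # The middle Q should win here: too small lags, too large is noisy.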

  11. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis, by referring to the bit error rate (BER), signal-to-noise ratio (SNR), and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.
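
    BER figures of this kind are typically obtained from the SNR under a Gaussian approximation; a relation commonly used in SAC-OCDMA analyses is BER = (1/2)·erfc(sqrt(SNR/8)). The short sketch below applies that assumed relation to check which SNR values meet the 10^-9 floor; it is illustrative, not the paper's exact noise model.

        import math

        def ber_from_snr(snr_linear):
            """Gaussian-approximation BER commonly used in SAC-OCDMA analyses."""
            return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

        for snr_db in (20, 22, 24, 26):
            snr = 10 ** (snr_db / 10)
            print(f"SNR = {snr_db} dB -> BER = {ber_from_snr(snr):.2e}")
        # Under this relation, BER = 1e-9 needs SNR ~ 144 (about 21.6 dB).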

  12. [Spectrum simulation based on data derived from red tide].

    PubMed

    Liu, Zhen-Yu; Cui, Ting-Wei; Yue, Jie; Jiang, Tao; Cao, Wen-Xi; Ma, Yi

    2011-11-01

    The present paper utilizes the absorption data of red tide water, measured during the growth and decline of a bloom, to retrieve the imaginary part of the index of refraction based on Mie theory, and carries out the simulation and analysis of the average absorption efficiency factors, average backscattering efficiency factors, and scattering phase function. The analysis of the simulation shows that Mie theory can reproduce the absorption properties of Chaetoceros socialis with an average error of 11%; that the average backscattering efficiency factors depend on the value of absorption, whose maximum lies in the 400 to 700 nm wavelength range; that the average backscattering efficiency factors reached a maximum on the 17th, with a low value during the outbreak of the red tide and a minimum on the 21st; and that the total scattering, which depends only weakly on absorption, is proportional to the size parameter, which represents the relative size of the cell diameter with respect to the wavelength, while the angular scattering intensity is inversely proportional to wavelength.

  13. Design of adaptive control systems by means of self-adjusting transversal filters

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.

    1986-01-01

    The design of closed-loop adaptive control systems based on nonparametric identification was addressed. Implementation is by self-adjusting Least Mean Square (LMS) transversal filters. The design concept is Model Reference Adaptive Control (MRAC). Major issues are to preserve the linearity of the error equations of each LMS filter and to prevent estimation bias due to process or measurement noise, thus providing necessary conditions for the convergence and stability of the control system. The controlled element is assumed to be asymptotically stable and minimum phase. Because of the nonparametric Finite Impulse Response (FIR) estimates provided by the LMS filters, a priori information on the plant model is needed only in broad terms. Following a survey of control system configurations and filter design considerations, system implementation is shown here in Single Input Single Output (SISO) format, which is readily extendable to multivariable forms. In extensive computer simulation studies, the controlled element is represented by a second-order system with widely varying damping, natural frequency, and relative degree.
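
    The building block used above is the classic self-adjusting LMS transversal (FIR) filter, sketched below for a plant-identification toy problem: the tap weights move along the negative gradient of the instantaneous squared error, and the error equation stays linear in the weights. The plant, step size, and filter order are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        N, order, mu = 2000, 16, 0.01
        x = rng.normal(size=N)                         # input (excitation) signal
        h_true = np.array([0.6, -0.3, 0.15, 0.05])     # unknown FIR "plant"
        d = np.convolve(x, h_true)[:N]                 # desired (plant output) signal

        w = np.zeros(order)                            # adaptive tap weights
        buf = np.zeros(order)                          # tapped delay line
        for k in range(N):
            buf = np.roll(buf, 1); buf[0] = x[k]       # shift the new sample in
            e = d[k] - w @ buf                         # output error (linear in w)
            w += 2 * mu * e * buf                      # LMS weight update

        print(np.round(w[:6], 3))                      # leading taps approach h_true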

  14. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
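
    A hedged sketch of the minimum-entropy autofocus criterion follows: a candidate (here, purely quadratic) phase error is applied across slow time, and its coefficient is chosen to minimize the Shannon entropy of the resulting image intensity. The synthetic point-scatterer data, the quadratic-only error model, and the scalar search are simplifying assumptions, not the paper's joint Bayesian estimator.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)
        M, Nr = 64, 64                                  # pulses x range cells
        m = np.arange(M) - M / 2
        true_q = 0.004                                  # true quadratic coefficient
        sig = np.zeros((M, Nr), complex)
        for dopp, cell in [(5, 10), (-12, 30), (20, 50)]:   # point scatterers
            sig[:, cell] += np.exp(2j * np.pi * dopp * np.arange(M) / M)
        sig *= np.exp(1j * true_q * m**2)[:, None]      # corrupting phase error
        sig += 0.05 * (rng.normal(size=sig.shape) + 1j * rng.normal(size=sig.shape))

        def entropy(q):
            """Image entropy after removing a trial quadratic phase q*m^2."""
            img = np.abs(np.fft.fft(sig * np.exp(-1j * q * m**2)[:, None], axis=0))**2
            p = img / img.sum()
            return -(p * np.log(p + 1e-12)).sum()

        res = minimize_scalar(entropy, bounds=(-0.01, 0.01), method="bounded")
        print(res.x)                                    # close to true_q = 0.004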

  15. Application of artificial neural networks and genetic algorithms to modeling molecular electronic spectra in solution

    NASA Astrophysics Data System (ADS)

    Lilichenko, Mark; Kelley, Anne Myers

    2001-04-01

    A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.
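
    A compact sketch of the hybrid strategy follows: a tiny genetic algorithm searches parameter space broadly, and Levenberg-Marquardt (scipy's least_squares with method="lm") polishes the best candidate. The "spectrum" here is a single Gaussian band, and the population size, operators, and generation count are illustrative assumptions; the paper's vibronic model and neural-network initializer are not reproduced.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(9)
        x = np.linspace(0, 10, 200)
        true = np.array([3.0, 6.0, 0.8])                 # amplitude, center, width
        spec = true[0] * np.exp(-((x - true[1]) / true[2]) ** 2) \
               + rng.normal(0, 0.05, x.size)

        def resid(p):
            return p[0] * np.exp(-((x - p[1]) / p[2]) ** 2) - spec

        # Genetic algorithm: keep the 10 best, refill with mutated copies.
        pop = rng.uniform([0.1, 0.0, 0.1], [10.0, 10.0, 3.0], size=(40, 3))
        for gen in range(15):
            cost = np.array([np.sum(resid(p) ** 2) for p in pop])
            elite = pop[np.argsort(cost)[:10]]
            pop = np.vstack([elite,
                             elite[rng.integers(0, 10, 30)]
                             + rng.normal(0, 0.2, (30, 3))])
            pop[:, 2] = np.clip(pop[:, 2], 0.05, None)   # keep widths positive

        best = pop[np.argmin([np.sum(resid(p) ** 2) for p in pop])]
        fit = least_squares(resid, best, method="lm")    # Levenberg-Marquardt polish
        print(np.round(fit.x, 3))                        # typically ~ [3, 6, 0.8]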

  16. UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers

    NASA Technical Reports Server (NTRS)

    Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2017-01-01

    This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered, in addition to sensor errors for vertical position and vertical rate, when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters with a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounter is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers due to the difficulty of making the correct left/right turn decision: a crossing conflict with an intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS' sensor.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Yang, F

    Purpose: Knowing MLC leaf positioning error over the course of treatment would be valuable for treatment planning, QA design, and patient safety. The objective of the current study was to quantify MLC positioning accuracy for VMAT delivery of head and neck treatment plans. Methods: A total of 837 MLC log files were collected from 14 head and neck cancer patients undergoing full-arc VMAT treatment on one Varian Trilogy machine. The actual and planned leaf gaps were extracted from the retrieved MLC log files. For a given patient, the leaf gap error percentage (LGEP), defined as the ratio of the actual leaf gap over the planned, was evaluated for each leaf pair at all the gantry angles recorded over the course of the treatment. Statistics describing the distribution of the largest LGEP (LLGEP) of the 60 leaf pairs, including the maximum, minimum, mean, kurtosis, and skewness, were evaluated. Results: For the 14 studied patients, the PTVs were located at the tonsil, base of tongue, larynx, supraglottis, nasal cavity, and thyroid gland, with volumes ranging from 72.0 cm³ to 602.0 cm³. The identified LLGEP differed between patients. It ranged from 183.9% to 457.7% with a mean of 368.6%. For the majority of the patients, the LLGEP distributions peaked at non-zero positions and showed no obvious dependence on gantry rotation. Kurtosis and skewness, with minimum/maximum of 66.6/217.9 and 6.5/12.6, respectively, suggested a relatively peaked and right-skewed leaf error distribution pattern. Conclusion: The results indicate that the pattern of MLC leaf gap error differs between patients with lesions located at similar anatomic sites. Understanding the systemic mechanisms underlying these observed error patterns necessitates examining more patient-specific plan parameters in a large patient cohort setting.

  18. A method to map errors in the deformable registration of 4DCT images

    PubMed Central

    Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.

    2010-01-01

    Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
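
    A rough sketch of the idea follows: displacement fields are decomposed by PCA, components whose amplitudes correlate with ground-truth landmark motion are treated as physical, and the remainder is reconstructed as a registration-error map. The synthetic data, the single breathing mode, and the simple correlation test are illustrative stand-ins for the authors' procedure, not a reproduction of it.

        import numpy as np

        rng = np.random.default_rng(10)
        n_phase, n_vox = 10, 500
        breathing = np.sin(np.linspace(0, 2 * np.pi, n_phase))   # physical mode
        phys_map = rng.normal(size=n_vox)                        # its spatial shape
        err = 0.3 * rng.normal(size=(n_phase, n_vox))            # simulated DIR errors
        dvf = np.outer(breathing, phys_map) + err                # observed DVFs

        # PCA of the phase-by-voxel DVF matrix.
        dvf0 = dvf - dvf.mean(axis=0)
        U, S, Vt = np.linalg.svd(dvf0, full_matrices=False)
        scores = U * S                                           # per-phase amplitudes

        landmarks = breathing + 0.02 * rng.normal(size=n_phase)  # measured landmarks
        corr = np.array([abs(np.corrcoef(scores[:, k], landmarks)[0, 1])
                         for k in range(len(S))])
        physical = corr > 0.9                        # modes that track the anatomy

        recon_err = dvf0 - scores[:, physical] @ Vt[physical]    # residual error map
        print("recovered error std:", round(recon_err.std(), 3),
              "vs simulated:", round(err.std(), 3))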

  19. Optical signal monitoring in phase modulated optical fiber transmission systems

    NASA Astrophysics Data System (ADS)

    Zhao, Jian

    Optical performance monitoring (OPM) is one of the essential functions for future high speed optical networks. Among the parameters to be monitored, chromatic dispersion (CD) is especially important since it has a significant impact on overall system performance. In this thesis, effective CD monitoring approaches for phase-shift keying (PSK) based optical transmission systems are investigated. A number of monitoring schemes based on radio frequency (RF) spectrum analysis and delay-tap sampling are proposed and their performance evaluated. A method for dispersion monitoring of differential phase-shift keying (DPSK) signals based on RF power detection is studied. The RF power spectrum is found to increase with the increase of CD and decrease with polarization mode dispersion (PMD). The spectral power density dependence on CD is studied theoretically and then verified through simulations and experiments. The monitoring sensitivity for nonreturn-to-zero differential phase-shift keying (NRZ-DPSK) and return-to-zero differential phase-shift keying (RZ-DPSK) based systems can reach 80ps/nm/dB and 34ps/nm/dB respectively. The scheme enables the monitoring of differential group delay (DGD) and CD simultaneously. The monitoring sensitivity of CD and DGD can reach 56.7ps/nm/dB and 3.1ps/dB using a bandpass filter. The effects of optical signal-to-noise ratio (OSNR), DGD, fiber nonlinearity and chirp on the monitoring results are investigated. Two RF pilot tones are employed for CD monitoring of DPSK signals. Specially selected pilot tone frequencies enable good monitoring sensitivity with minimum influence on the received signals. A dynamic range exceeding 35dB and a monitoring sensitivity up to 9.5ps/nm/dB are achieved. An asynchronous sampling technique is employed for CD monitoring. A signed CD monitoring method for 10Gb/s NRZ-DPSK and RZ-DPSK systems using the asynchronous delay-tap sampling technique is studied. The demodulated signals suffer asymmetric waveform distortion if there is a phase error (Δφ) in the delay interferometer (DI) and in the presence of residual CD. Using delay-tap sampling, the scatter plots can reflect this signal distortion through their asymmetric characteristics. A distance ratio (DR) is defined to represent the change of the scatter plots, which is directly related to the accumulated CD. The monitoring range can be up to +/-400ps/nm and +/-720ps/nm for 10Gb/s NRZ-DPSK and RZ-DPSK signals, respectively, with a 45° phase error in the DI. The monitoring sensitivity reaches +/-8ps/nm and CD polarity discrimination is realized. It is found that the signal degradation is related to the increase of the absolute value of the CD or phase mismatch. The effect of different polarities of phase error on CD monitoring is also analyzed. The shoulder location depends on the sign of the product D·L·Δφ: if D·L·Δφ > 0, the shoulder appears on the trailing edge, whereas it appears on the leading edge when D·L·Δφ < 0. The analysis shows that the phase error is equivalent to a frequency offset of the optical source, so signed frequency-offset monitoring is also demonstrated. The monitoring results show that the monitoring range can reach +/-2.2GHz and the monitoring sensitivity is around 27MHz. The effect of nonlinearity, OSNR and the bandwidth of the lowpass filter on the proposed monitoring method has also been studied. The signed CD monitoring for a 100Gb/s carrier suppressed return-to-zero differential quadrature phase-shift keying (CSRZ-DQPSK) system based on the delay-tap sampling technology is demonstrated. The monitoring range and monitoring resolution can reach +/-32ps/nm and +/-8ps/nm, respectively. A signed CD and optical carrier wavelength monitoring scheme using a cross-correlation method for on-off keying (OOK) wavelength division multiplexing (WDM) systems is proposed and demonstrated. CD monitoring sensitivity is high and can be less than 10% of the bit period. Wavelength monitoring is implemented using the proposed approach. The monitoring results show that the sensitivity can reach up to 1.37ps/GHz.

  20. Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.

    PubMed

    Carraro, Paolo; Zago, Tatiana; Plebani, Mario

    2012-03-01

    Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15,917 blood sample tubes were collected and 52,982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29,916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.

  1. Complexity of possibly gapped histogram and analysis of histogram.

    PubMed

    Fushing, Hsieh; Roy, Tania

    2018-02-01

    We demonstrate that gaps and distributional patterns embedded within real-valued measurements are inseparable biological and mechanistic information contents of the system. Such patterns are discovered through data-driven possibly gapped histogram, which further leads to the geometry-based analysis of histogram (ANOHT). Constructing a possibly gapped histogram is a complex problem of statistical mechanics due to the ensemble of candidate histograms being captured by a two-layer Ising model. This construction is also a distinctive problem of Information Theory from the perspective of data compression via uniformity. By defining a Hamiltonian (or energy) as a sum of total coding lengths of boundaries and total decoding errors within bins, this issue of computing the minimum energy macroscopic states is surprisingly resolved by applying the hierarchical clustering algorithm. Thus, a possibly gapped histogram corresponds to a macro-state. And then the first phase of ANOHT is developed for simultaneous comparison of multiple treatments, while the second phase of ANOHT is developed based on classical empirical process theory for a tree-geometry that can check the authenticity of branches of the treatment tree. The well-known Iris data are used to illustrate our technical developments. Also, a large baseball pitching dataset and a heavily right-censored divorce data are analysed to showcase the existential gaps and utilities of ANOHT.
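
    A loose sketch of the energy minimization described above follows: for a candidate binning, a "Hamiltonian" is taken as a boundary-coding cost plus the total within-bin decoding error (here, squared deviation from the bin mean), and candidate gapped bins come from single-linkage hierarchical clustering, which merges nearest values first and leaves gaps between clusters. The specific cost choices and data are assumptions, not the paper's definitions.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(3)
        x = np.sort(np.concatenate([rng.normal(0, 1.0, 200),
                                    rng.normal(6, 0.5, 100)]))   # gapped data

        def hamiltonian(x, labels, boundary_cost=100.0):
            """Assumed energy: within-bin decoding error + boundary coding cost."""
            cost = 0.0
            for lab in np.unique(labels):
                seg = x[labels == lab]
                cost += ((seg - seg.mean()) ** 2).sum()
            return cost + boundary_cost * (len(np.unique(labels)) + 1)

        Z = linkage(x[:, None], method="single")
        for k in (1, 2, 3, 5):                       # candidate macro-states
            labels = fcluster(Z, t=k, criterion="maxclust")
            print(k, round(hamiltonian(x, labels), 1))
        # The gapped two-bin state should attain the lowest energy here.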

  2. Kramers-Kronig receiver operable without digital upsampling.

    PubMed

    Bo, Tianwai; Kim, Hoon

    2018-05-28

    The Kramers-Kronig (KK) receiver is capable of retrieving the phase information of optical single-sideband (SSB) signal from the optical intensity when the optical signal satisfies the minimum phase condition. Thus, it is possible to direct-detect the optical SSB signal without suffering from the signal-signal beat interference and linear transmission impairments. However, due to the spectral broadening induced by nonlinear operations in the conventional KK algorithm, it is necessary to employ the digital upsampling at the beginning of the digital signal processing (DSP). The increased number of samples at the DSP would hinder the real-time implementation of this attractive receiver. Hence, we propose a new DSP algorithm for KK receiver operable at 2 samples per symbol. We adopt a couple of mathematical approximations to avoid the use of nonlinear operations such as logarithm and exponential functions. By using the proposed algorithm, we demonstrate the transmission of 112-Gb/s SSB orthogonal frequency-division-multiplexed signal over an 80-km fiber link. The results show that the proposed algorithm operating at 2 samples per symbol exhibits similar performance to the conventional KK one operating at 6 samples per symbol. We also present the error analysis of the proposed algorithm for KK receiver in comparison with the conventional one.
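
    The core KK retrieval step can be sketched in a few lines: for a signal satisfying the minimum phase condition, the phase is the Hilbert transform of half the log-intensity. The toy SSB-like signal below (a strong carrier plus a weaker sideband, which guarantees minimum phase) is an illustrative stand-in for the paper's 112-Gb/s OFDM experiment, and the conventional log/exp-based algorithm, not the proposed approximation, is what is shown.

        import numpy as np
        from scipy.signal import hilbert

        t = np.linspace(0, 1, 4096, endpoint=False)
        field = 3.0 + 0.5 * np.exp(2j * np.pi * 40 * t)    # carrier + one sideband
        intensity = np.abs(field) ** 2                     # what a photodiode sees

        # KK phase retrieval: Hilbert transform of half the log-intensity.
        phase = np.imag(hilbert(0.5 * np.log(intensity)))
        recovered = np.sqrt(intensity) * np.exp(1j * phase)

        # The complex field is recovered up to numerical error.
        print(f"max reconstruction error: {np.max(np.abs(recovered - field)):.2e}")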

  3. Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.

    1992-01-01

    An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both zeroth order information needed for analysis and first order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the error introduced by ignoring aeroelastic effects is not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and, generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.

  4. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
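
    A loose sketch of the KDE-based MMSE idea follows: with the joint behavior of a context feature X and the block value Y modeled by Gaussian kernels, the MMSE predictor E[Y | X = x] becomes a Nadaraya-Watson kernel regression. The data, the kernel, and the Silverman-style bandwidth are illustrative assumptions, not the codec's actual statistics or the paper's bandwidth-estimation method.

        import numpy as np

        rng = np.random.default_rng(7)
        X = rng.uniform(0, 10, 400)                   # context (e.g., neighbor mean)
        Y = 2.0 * np.sin(X) + 0.5 * X + rng.normal(0, 0.3, 400)    # block value

        h = 1.06 * X.std() * len(X) ** (-1 / 5)       # Silverman's rule of thumb

        def mmse_predict(x):
            """Kernel (Nadaraya-Watson) estimate of E[Y | X = x]."""
            w = np.exp(-0.5 * ((x - X) / h) ** 2)     # Gaussian kernel weights
            return (w * Y).sum() / w.sum()

        # Roughly matches the true regression function 2*sin(x) + 0.5*x.
        print(round(mmse_predict(3.0), 2), "vs true",
              round(2 * np.sin(3.0) + 1.5, 2))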

  5. Basic Studies on High Pressure Air Plasmas

    DTIC Science & Technology

    2006-08-30

    Part II, Technical report (excerpts): Plasma-induced phase shift ... two-wavelength heterodyne interferometry applied to atmospheric-pressure air plasma ... plasma-induced phase shift and electron density ... a driver, since the error on the frequency leads to an error on the phase shift ... Optical elements: protected mirrors must be used to stand ...

  6. An approach to the analysis of performance of quasi-optimum digital phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1973-01-01

    An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.

  7. Cost effectiveness of the stream-gaging program in Ohio

    USGS Publications Warehouse

    Shindel, H.L.; Bartlett, W.P.

    1986-01-01

    This report documents the results of the cost effectiveness of the stream-gaging program in Ohio. Data uses and funding sources were identified for 107 continuous stream gages currently being operated by the U.S. Geological Survey in Ohio with a budget of $682,000; this budget includes field work for other projects and excludes stations jointly operated with the Miami Conservancy District. No stream gages were identified as having insufficient reason to continue their operation; nor were any stations identified as having uses limited only to short-term studies. All 107 stations should be maintained in the program for the foreseeable future. The average standard error of estimation of streamflow records is 29.2 percent at the present level of funding. A minimum budget of $679,000 is required to operate the 107-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. At the minimum budget, the average standard error is 31.1 percent. The maximum budget analyzed was $1,282,000, which resulted in an average standard error of 11.1 percent. A need for additional gages has been identified by the other agencies that cooperate in the program. It is suggested that these gages be installed as funds become available.

  8. Effect of ephemeris errors on the accuracy of the computation of the tangent point altitude of a solar scanning ray as measured by the SAGE 1 and 2 instruments

    NASA Technical Reports Server (NTRS)

    Buglia, James J.

    1989-01-01

    An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, result from the discontinuities in the ephemeris tapes resulting from the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predict data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.

  9. Evaluation of a pulmonary strain model by registration of dynamic CT scans

    NASA Astrophysics Data System (ADS)

    Pomeroy, Marc; Liang, Zhengrong; Brehm, Anthony

    2017-03-01

    Idiopathic pulmonary fibrosis (IPF) is a chronic fibrotic lung disease that develops in adults without any known cause. It is an interstitial lung disease in which the lung tissue becomes scarred and stiffens, ultimately leading to respiratory failure. This disease currently has no cure and limited treatment options, leading to an average survival time of 3-5 years after diagnosis. In this paper we employ a mathematical model simulating the lung parenchyma as hexagons with elastic forces applied to connecting vertices and opposing vertices. Using an image registration algorithm, we obtain trajectories from 4D-CT scans of a healthy patient and of one suffering from IPF. Converting the image trajectories into a hexagonal lattice, we fit the model parameters to match the respiratory motion seen for both patients across multiple image slices. We found the model could adequately describe the healthy lung slices, with a minimum average error between corresponding vertices of 1.66 mm. For the fibrotic lung slices the model was less accurate, maintaining a higher average error across all slices. Using the optimized parameters, we apply the forces predicted by the model at the image trajectory positions for each phase. Although the error is large, the spring constant values determined for the fibrotic patient were not as high as we expected, and were more often than not lower than those of the corresponding healthy lung slices. However, the net force distribution for some of those slices was still found to be greater than for the healthy lung counterparts. Other modifications to the model, including additional directional components and changes to which vertices receive applied forces, remain to be explored; with the limited sample size available, a clear distinction between the healthy and fibrotic lung cannot yet be made by this model.

  10. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

    Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to the scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. To this end, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, the root-mean-square error (RMSE), mean absolute error (MAE), and determination coefficient (R^2), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm, for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R^2 = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R^2 = 0.996), respectively.
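
    An illustrative sketch of the SVM half of this comparison follows: a support vector regression model maps a few meteorological inputs (here Tmax, Tmin, T) to soil temperature at one depth and is scored with the same RMSE/MAE/R^2 statistics. The synthetic data stand in for the 30-station Iranian records, which are not reproduced; scaling, kernel, and hyperparameters are assumptions.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

        rng = np.random.default_rng(5)
        X = rng.uniform([-5, -15, -10], [40, 25, 32], size=(300, 3))  # Tmax, Tmin, T
        ts = 0.8 * X[:, 2] + 0.1 * X[:, 0] + 2.0 + rng.normal(0, 0.6, 300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, ts, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(f"RMSE={mean_squared_error(y_te, pred)**0.5:.2f} degC  "
              f"MAE={mean_absolute_error(y_te, pred):.2f} degC  "
              f"R2={r2_score(y_te, pred):.3f}")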

  11. Uncertainty Assessment of the NASA Earth Exchange Global Daily Downscaled Climate Projections (NEX-GDDP) Dataset

    NASA Technical Reports Server (NTRS)

    Wang, Weile; Nemani, Ramakrishna R.; Michaelis, Andrew; Hashimoto, Hirofumi; Dungan, Jennifer L.; Thrasher, Bridget L.; Dixon, Keith W.

    2016-01-01

    The NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset is comprised of downscaled climate projections derived from 21 General Circulation Model (GCM) runs conducted under the Coupled Model Intercomparison Project Phase 5 (CMIP5) across two of the four greenhouse gas emissions scenarios (RCP4.5 and RCP8.5). Each of the climate projections includes daily maximum temperature, minimum temperature, and precipitation for the period from 1950 through 2100, and the spatial resolution is 0.25 degrees (approximately 25 km x 25 km). The GDDP dataset has been warmly received by the science community for conducting studies of climate change impacts at local to regional scales, but a comprehensive evaluation of its uncertainties is still missing. In this study, we apply the Perfect Model Experiment framework (Dixon et al. 2016) to quantify the key sources of uncertainty from the observational baseline dataset, the downscaling algorithm, and some intrinsic assumptions (e.g., the stationarity assumption) inherent to the statistical downscaling techniques. We developed a set of metrics to evaluate downscaling errors resulting from bias correction ("quantile mapping"), spatial disaggregation, and the temporal-spatial non-stationarity of climate variability. Our results highlight the spatial disaggregation (or interpolation) errors, which dominate the overall uncertainties of the GDDP dataset, especially over heterogeneous and complex terrain (e.g., mountains and coastal areas). In comparison, the temporal errors in the GDDP dataset tend to be more constrained. Our results also indicate that the downscaled daily precipitation has relatively larger uncertainties than the temperature fields, reflecting the rather stochastic nature of precipitation in space. Therefore, our results provide insights for improving statistical downscaling algorithms and products in the future.

  12. A class of optimum digital phase locked loops for the DSN advanced receiver

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.; Kumar, R.

    1985-01-01

    A class of optimum digital filters for the digital phase locked loop of the Deep Space Network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum of squares of the deterministic dynamic component of phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters is obtained such that, for any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the best transient response over all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
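
    In symbols, a hedged restatement of this weighted criterion (the notation below is assumed, not taken from the paper) is

        J(\mathbf{K}) \;=\; \sigma_{\phi}^{2}(\mathbf{K})
                      \;+\; \lambda \sum_{n} d_{n}^{2}(\mathbf{K}),
        \qquad \lambda \ge 0,

    where K collects the loop filter gains, \sigma_{\phi}^{2} is the variance of the random phase-error component at the NCO output, and d_{n} is the deterministic dynamic phase error at step n; sweeping the weight \lambda traces out the family of filters, each member corresponding to a unique equivalent loop-noise bandwidth.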

  13. Correction of phase-shifting error in wavelength scanning digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-05-01

    Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common-path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase-shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, renders the reconstructed topographies erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison with other measurement methods.

  14. Broadband CARS spectral phase retrieval using a time-domain Kramers–Kronig transform

    PubMed Central

    Liu, Yuexin; Lee, Young Jong; Cicerone, Marcus T.

    2014-01-01

    We describe a closed-form approach for performing a Kramers–Kronig (KK) transform that can be used to rapidly and reliably retrieve the phase, and thus the resonant imaginary component, from a broadband coherent anti-Stokes Raman scattering (CARS) spectrum with a nonflat background. In this approach we transform the frequency-domain data to the time domain, perform an operation that ensures a causality criterion is met, then transform back to the frequency domain. The fact that this method handles causality in the time domain allows us to conveniently account for spectrally varying nonresonant background from CARS as a response function with a finite rise time. A phase error accompanies KK transform of data with finite frequency range. In examples shown here, that phase error leads to small (<1%) errors in the retrieved resonant spectra. PMID:19412273

  15. Minimum emittance in TBA and MBA lattices

    NASA Astrophysics Data System (ADS)

    Xu, Gang; Peng, Yue-Mei

    2015-03-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) cells, and even multiple bend achromats (MBA) have been considered. This paper derives theoretically the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions attaining the minimum emittance of a TBA related to phase advance in some special cases with a pure mathematics method. These results may give some direction to lattice design.
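
    As a hedged sketch of the scaling behind this result, in standard storage-ring notation (assumed here rather than quoted from the paper), the theoretical minimum emittance of a cell whose dipole bends by angle \theta is

        \varepsilon_{\mathrm{TME}} \;=\; \frac{C_q\, \gamma^{2}\, \theta^{3}}{12\sqrt{15}\, J_x},
        \qquad C_q \approx 3.83 \times 10^{-13}\ \mathrm{m},

    where \gamma is the Lorentz factor and J_x the horizontal damping partition number. Because the contribution of each dipole grows as \theta^{3}, balancing the inner and outer dipole contributions in a TBA/MBA cell is what produces the 3^(1/3) ratio of bending angles stated above.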

  16. Quasi-Biennial Oscillation and Solar Cycle Influences over the Winter Arctic Simulated by the WACCM4 Model

    NASA Astrophysics Data System (ADS)

    Li, K. F.; Limpasuvan, T. L.; Limpasuvan, V.; Tung, K. K.; Yung, Y. L.

    2017-12-01

    Observations show that the quasi-biennial oscillation (QBO) and the 11-year solar cycle perturb the polar vortex via planetary wave convergence at high latitudes, a mechanism first proposed by Holton and Tan in 1980. Their perturbations lead to increases of stratospheric sudden warming events, and hence observable increases in temperature and ozone abundance in the polar vortex, during the easterly phase of QBO and the solar maximum. Here we simulate the changes in the polar atmosphere using the Whole Atmosphere Community Climate Model 4 (WACCM4) with the prescribed QBO and 11-year solar cycle forcing. The simulation is diagnosed in four groups: westerly QBO phase and solar minimum, westerly QBO phase and solar maximum, easterly QBO phase and solar minimum, and easterly QBO phase and solar maximum. The simulated changes in temperature and ozone are compared with satellite observations.

  17. Combination of Complex-Based and Magnitude-Based Multiecho Water-Fat Separation for Accurate Quantification of Fat-Fraction

    PubMed Central

    Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Multipoint water–fat separation techniques rely on different water–fat phase shifts generated at multiple echo times to decompose water and fat. Therefore, these methods require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through “magnitude-based” methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fractions greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
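
    The 50% ambiguity that motivates the hybrid approach can be shown numerically: with idealized in-phase/opposed-phase echoes, magnitude-only data cannot distinguish a fat-fraction FF from 1 - FF, while the signed (complex) signals can. The echo model and numbers below are illustrative assumptions.

        import numpy as np

        def signals(w, f):
            """Idealized two-echo Dixon signals: in-phase, opposed-phase."""
            return np.array([w + f, w - f])

        a = signals(w=0.8, f=0.2)          # fat-fraction 20%
        b = signals(w=0.2, f=0.8)          # fat-fraction 80%
        print(np.abs(a), np.abs(b))        # identical magnitudes: ambiguous
        print(a, b)                        # the signed signals differ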

  18. A New Approach to Estimate Forest Parameters Using Dual-Baseline Pol-InSAR Data

    NASA Astrophysics Data System (ADS)

    Bai, L.; Hong, W.; Cao, F.; Zhou, Y.

    2009-04-01

    In POL-InSAR applications using the ESPRIT technique, it is assumed that there exist stable scattering centres in the forest. However, forest observations suffer severely from volume and temporal decorrelation, so the scatterers are not as stable as assumed and the obtained interferometric information is not as accurate as expected. Besides, the ESPRIT technique cannot identify which interferometric phases correspond to the ground and the canopy, and it provides multiple estimates of the height between two scattering centers due to phase unwrapping. Therefore, estimation errors are introduced into the forest height results. To suppress these two types of errors, we use dual-baseline POL-InSAR data to estimate forest height. Dual-baseline coherence optimization is applied to obtain interferometric information for stable scattering centers in the forest. From the interferometric phases for different baselines, the estimation errors caused by phase unwrapping are resolved, and other estimation errors can be suppressed as well. Experiments were performed on ESAR L-band POL-InSAR data. The experimental results show that the proposed method provides more accurate forest heights than the ESPRIT technique.

  1. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
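
    A small Monte Carlo sketch in the spirit of this comparison follows: with a gamma(alpha, beta) prior on the Poisson intensity, the Bayes estimator is the posterior mean (alpha + sum of counts) / (beta + n), compared against the maximum likelihood (also minimum variance unbiased) estimator, the sample mean, by empirical mean-squared error. The prior settings and sample size are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(6)
        alpha, beta, n, trials = 2.0, 1.0, 10, 20000

        lam = rng.gamma(alpha, 1.0 / beta, size=trials)     # intensities from prior
        x = rng.poisson(lam[:, None], size=(trials, n))     # n samples per trial

        mle = x.mean(axis=1)                                # MLE / MVU estimator
        bayes = (alpha + x.sum(axis=1)) / (beta + n)        # posterior mean

        print("MSE (MLE/MVUE):", np.mean((mle - lam) ** 2))
        print("MSE (Bayes):   ", np.mean((bayes - lam) ** 2))   # smaller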

  2. ATM Quality of Service Tests for Digitized Video Using ATM Over Satellite: Laboratory Tests

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Brooks, David E.; Frantz, Brian D.

    1997-01-01

    A digitized video application was used to help determine minimum quality of service parameters for asynchronous transfer mode (ATM) over satellite. For these tests, binomially distributed and other errors were digitally inserted in an intermediate frequency link via a satellite modem and a commercial gaussian noise generator. In this paper, the relationship between the ATM cell error and cell loss parameter specifications is discussed with regard to this application. In addition, the video-encoding algorithms, test configurations, and results are presented in detail.

  3. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques for a 2-class 4-feature discrimination problem with previously reported results, and for 4-class 4-feature multispectral scanner Landsat data classified by training and testing on the available data.

  4. Evaluation of quadrature-phase-shift-keying signal characteristics in W-band radio-over-fiber transmission using direct in-phase/quadrature-phase conversion technique

    NASA Astrophysics Data System (ADS)

    Suzuki, Meisaku; Kanno, Atsushi; Yamamoto, Naokatsu; Sotobayashi, Hideyuki

    2016-02-01

    The effects of in-phase/quadrature-phase (IQ) imbalances are evaluated with a direct IQ down-converter in the W-band (75-110 GHz). The IQ imbalance of the converter is measured to be within ±10° over an intermediate frequency range of DC-26.5 GHz. 1-8-Gbaud quadrature phase-shift keying (QPSK) signals are transmitted successfully, with observed bit error rates within the forward error correction limit of 2×10⁻³, using radio-over-fiber (RoF) techniques. The direct down-conversion technique is applicable to next-generation high-speed wireless access communication systems in the millimeter-wave band.

  5. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  6. 40 CFR 80.91 - Individual baseline determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...—6.45 Exhaust benzene emissions, complex model—33.03 mg/mile Exhaust toxics emissions, Phase I—50.67 mg/mile Exhaust toxics emissions, Phase II—104.5 mg/mile NOX emissions, Phase I—714.4 mg/mile NOX emissions, Phase II—1461. mg/mile (d) Data collection and testing requirements—(1) Minimum sampling...

  7. 40 CFR 80.91 - Individual baseline determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...—6.45 Exhaust benzene emissions, complex model—33.03 mg/mile Exhaust toxics emissions, Phase I—50.67 mg/mile Exhaust toxics emissions, Phase II—104.5 mg/mile NOX emissions, Phase I—714.4 mg/mile NOX emissions, Phase II—1461. mg/mile (d) Data collection and testing requirements—(1) Minimum sampling...

  8. 40 CFR 80.91 - Individual baseline determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...—6.45 Exhaust benzene emissions, complex model—33.03 mg/mile Exhaust toxics emissions, Phase I—50.67 mg/mile Exhaust toxics emissions, Phase II—104.5 mg/mile NOX emissions, Phase I—714.4 mg/mile NOX emissions, Phase II—1461. mg/mile (d) Data collection and testing requirements—(1) Minimum sampling...

  9. 40 CFR 80.91 - Individual baseline determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...—6.45 Exhaust benzene emissions, complex model—33.03 mg/mile Exhaust toxics emissions, Phase I—50.67 mg/mile Exhaust toxics emissions, Phase II—104.5 mg/mile NOX emissions, Phase I—714.4 mg/mile NOX emissions, Phase II—1461. mg/mile (d) Data collection and testing requirements—(1) Minimum sampling...

  10. Kinematic markers dissociate error correction from sensorimotor realignment during prism adaptation.

    PubMed

    O'Shea, Jacinta; Gaveau, Valérie; Kandel, Matthieu; Koga, Kazuo; Susami, Kenji; Prablanc, Claude; Rossetti, Yves

    2014-03-01

    This study investigated the motor control mechanisms that enable healthy individuals to adapt their pointing movements during prism exposure to a rightward optical shift. In the prism adaptation literature, two processes are typically distinguished. Strategic motor adjustments are thought to drive the pattern of rapid endpoint error correction typically observed during the early stage of prism exposure. This is distinguished from so-called 'true sensorimotor realignment', normally measured with a different pointing task, at the end of prism exposure, which reveals a compensatory leftward 'prism after-effect'. Here, we tested whether each mode of motor compensation - strategic adjustments versus 'true sensorimotor realignment' - could be distinguished, by analyzing patterns of kinematic change during prism exposure. We hypothesized that fast feedforward versus slower feedback error corrective processes would map onto two distinct phases of the reach trajectory. Specifically, we predicted that feedforward adjustments would drive rapid compensation of the initial (acceleration) phase of the reach, resulting in the rapid reduction of endpoint errors typically observed early during prism exposure. By contrast, we expected visual-proprioceptive realignment to unfold more slowly and to reflect feedback influences during the terminal (deceleration) phase of the reach. The results confirmed these hypotheses. Rapid error reduction during the early stage of prism exposure was achieved by trial-by-trial adjustments of the motor plan, which were proportional to the endpoint error feedback from the previous trial. By contrast, compensation of the terminal reach phase unfolded slowly across the duration of prism exposure. Even after 100 trials of pointing through prisms, adaptation was incomplete, with participants continuing to exhibit a small rightward shift in both the reach endpoints and in the terminal phase of reach trajectories. Individual differences in the degree of adaptation of the terminal reach phase predicted the magnitude of prism after-effects. In summary, this study identifies distinct kinematic signatures of fast strategic versus slow sensorimotor realignment processes, which combine to adjust motor performance to compensate for a prismatic shift. © 2013 Elsevier Ltd. All rights reserved.

  11. Aircraft flight test trajectory control

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Walker, R. A.

    1988-01-01

    Two design techniques for linear flight test trajectory controllers (FTTCs) are described: eigenstructure assignment and the minimum error excitation technique. The two techniques are used to design FTTCs for an F-15 aircraft model for eight different maneuvers at thirty different flight conditions. An evaluation of the FTTCs is presented.

  12. Three filters for visualization of phase objects with large variations of phase gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz

    2009-02-20

    We propose three amplitude filters for the visualization of phase objects. They act on the spectra of pure-phase objects in the frequency plane and are based on the tangent and error functions, as well as on an antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial-frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.
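
    An illustrative 1-D sketch of an error-function amplitude filter acting on the spectrum of a pure-phase object (the object and filter width below are arbitrary choices, not the paper's parameters):

    ```python
    import numpy as np
    from scipy.special import erf

    n = 512
    x = np.linspace(-1.0, 1.0, n)
    obj = np.exp(1j * 2.0 * np.exp(-x**2 / 0.05))  # pure-phase object, unit amplitude

    fx = np.fft.fftfreq(n, d=x[1] - x[0])          # spatial frequencies (unshifted)
    H = 0.5 * (1.0 + erf(fx / 5.0))                # erf amplitude filter; width assumed

    # filtering in the frequency plane turns phase gradients into intensity contrast
    img = np.fft.ifft(H * np.fft.fft(obj))
    intensity = np.abs(img) ** 2
    ```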

  13. Stochastic characterization of phase detection algorithms in phase-shifting interferometry

    DOE PAGES

    Munteanu, Florin

    2016-11-01

    Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. To cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. The usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.

  14. Analytical estimation of laser phase noise induced BER floor in coherent receiver with digital signal processing.

    PubMed

    Vanin, Evgeny; Jacobsen, Gunnar

    2010-03-01

    The Bit-Error-Ratio (BER) floor caused by laser phase noise in an optical fiber communication system with differential quadrature phase shift keying (DQPSK) and coherent detection followed by digital signal processing (DSP) is analytically evaluated. An in-phase and quadrature (I&Q) receiver with carrier phase recovery using DSP is considered. The carrier phase recovery is based on a phase estimate from a finite sum (block) of the signal samples raised to the fourth power, with phase unwrapping at transitions between blocks. It is demonstrated that errors generated at block transitions make the dominant contribution to the system BER floor when the impact of additive noise is negligibly small in comparison with the effect of laser phase noise. The BER floor for the case in which phase unwrapping is omitted is also derived analytically and used to emphasize the crucial importance of this signal processing operation. The analytical results are verified by full Monte Carlo simulations. The BER for another type of DQPSK receiver, based on differential phase detection, is also obtained in analytical form using the principle of conditional probability, which is justified in this case by the statistical independence of the laser-phase-noise-induced signal phase error and the additive noise contributions. Based on the analytical results, the laser linewidth tolerance is calculated for different system cases.
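
    A sketch of the block-wise fourth-power phase estimate with unwrapping at block transitions; the Wiener phase-noise model and all parameter values are illustrative, additive noise is omitted, and NumPy ≥ 1.21 is assumed for the period keyword of np.unwrap:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    nsym, block = 4096, 16
    lw_T = 1e-4  # laser linewidth times symbol period (assumed value)

    sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, nsym)))  # QPSK
    theta = np.cumsum(rng.normal(0.0, np.sqrt(2 * np.pi * lw_T), nsym))   # Wiener phase noise
    rx = sym * np.exp(1j * theta)

    blocks = rx[: nsym - nsym % block].reshape(-1, block)
    # fourth-power block estimate (the constant pi/4 offset is immaterial for DQPSK)
    est = np.angle(np.sum(blocks ** 4, axis=1)) / 4.0
    # the critical step analyzed in the paper: unwrap across block transitions
    est = np.unwrap(est, period=np.pi / 2)
    corrected = blocks * np.exp(-1j * est[:, None])   # carrier-phase-recovered symbols
    ```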

  15. Liquid crystal point diffraction interferometer. Ph.D. Thesis - Arizona Univ., 1995

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.

    1995-01-01

    A new instrument, the liquid crystal point diffraction interferometer (LCPDI), has been developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer (PDI) and adds phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction. This dissertation describes the theory of both the PDI and liquid crystal phase control. The design considerations for the LCPDI are presented, including manufacturing considerations. The operation and performance of the LCPDI are discussed, including sections regarding alignment, calibration, and amplitude modulation effects. The LCPDI is then demonstrated using two phase objects: a defocus difference wavefront, and a temperature distribution across a heated chamber filled with silicone oil. The measured results are compared to theoretical or independently measured results and show excellent agreement. A computer simulation of the LCPDI was performed to verify the source of an observed periodic phase measurement error. The error stems from intensity variations caused by dye molecules rotating within the liquid crystal layer. Methods and algorithms are presented which reduce this error; they are also useful for any phase-stepping interferometer subject to unwanted intensity fluctuations, such as those caused by unregulated lasers.

  16. PLATFORM DEFORMATION PHASE CORRECTION FOR THE AMiBA-13 COPLANAR INTERFEROMETER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Yu-Wei; Lin, Kai-Yang; Huang, Yau-De

    2013-05-20

    We present a new way to solve the platform deformation problem of coplanar interferometers. The platform of a coplanar interferometer can be deformed by driving forces and gravity. A deformed platform induces extra components into the geometric delay of each baseline and changes the phases of the observed visibilities. The reconstructed images are also diluted by these phase errors. The platform deformations of the Yuan-Tseh Lee Array for Microwave Background Anisotropy (AMiBA) were modeled based on photogrammetry data for about 20 mount pointing positions. We then used the differential optical pointing error between two optical telescopes to fit the model parameters over the entire horizontal coordinate space. With the platform deformation model, we can predict the errors of the geometric phase delays due to platform deformation for a given azimuth and elevation of the targets and calibrators. After correcting the phases of the radio point sources in the AMiBA interferometric data, we recover 50%-70% of the flux lost to phase errors. This allows us to restore more than 90% of a source flux. The method outlined in this work is applicable not only to the correction of deformation for other coplanar telescopes but also to single-dish telescopes with deformation problems. This work also forms the basis of the upcoming science results of AMiBA-13.

  17. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice that of a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When errors are due to AWGN, the ratio of the double-error rate to the single-error rate can be very high, although it approaches zero at high SNR. To improve the error rate, differential detection through maximum-likelihood decoding based on multiple (N) symbol observations is considered. For large N and high SNR, this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.

  18. Testing and Calibration of Phase Plates for JWST Optical Simulator

    NASA Technical Reports Server (NTRS)

    Gong, Qian; Chu, Jenny; Tournois, Severine; Eichhorn, William; Kubalak, David

    2011-01-01

    Three phase plates were designed to simulate the JWST segmented primary mirror wavefront at three on-orbit alignment stages: coarse phasing, intermediate phasing, and fine phasing. The purpose is to verify JWST's on-orbit wavefront sensing capability. Among the three stages, coarse alignment is defined as a piston error between adjacent segments of 30 µm to 300 µm, intermediate as 0.4 µm to 10 µm, and fine as below 0.4 µm. The phase plates were made of fused silica and were assembled in the JWST Optical Simulator (OSIM). The piston difference was realized by the thickness difference between two adjacent segments. The two important parameters for phase plates are piston and wavefront errors. The Dispersed Fringe Sensor (DFS) method was used for the initial coarse piston evaluation, which is the emphasis of this paper. A Point Diffraction Interferometer (PDI) is used for fine piston and wavefront error. In order to remove the piston's 2π uncertainty with the PDI, three laser wavelengths, 640 nm, 660 nm, and 780 nm, are used for the measurement. The DFS test setup, analysis algorithm, and results are presented. The phase plate design concept and its application (i.e., verifying the JWST on-orbit alignment algorithm) are described. The layout of the JWST OSIM and the function of the phase plates in OSIM are also addressed briefly.

  19. Writing executable assertions to test flight software

    NASA Technical Reports Server (NTRS)

    Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.

    1984-01-01

    An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement evaluates to true. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account their use for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
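
    As a minimal illustration (in Python, with a hypothetical variable name and range, not the flight software's), an executable assertion is simply a checked predicate embedded in the code:

    ```python
    def set_pitch_command(cmd_deg: float) -> float:
        # executable assertion: evaluates to true on error-free execution,
        # and flags a fault for exception handling otherwise
        assert -15.0 <= cmd_deg <= 25.0, f"pitch command out of range: {cmd_deg}"
        return cmd_deg
    ```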

  20. Turboprop+: enhanced Turboprop diffusion-weighted imaging with a new phase correction.

    PubMed

    Lee, Chu-Yu; Li, Zhiqiang; Pipe, James G; Debbins, Josef P

    2013-08-01

    Faster periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) diffusion-weighted imaging acquisitions, such as Turboprop and X-prop, remain subject to phase errors inherent to a gradient echo readout, which ultimately limits the applied turbo factor (number of gradient echoes between each pair of radiofrequency refocusing pulses) and, thus, scan time reductions. This study introduces a new phase correction to Turboprop, called Turboprop+. This technique employs calibration blades, which generate 2-D phase error maps and are rotated in accordance with the data blades, to correct phase errors arising from off-resonance and system imperfections. The results demonstrate that with a small increase in scan time for collecting calibration blades, Turboprop+ had a superior immunity to the off-resonance-related artifacts when compared to standard Turboprop and recently proposed X-prop with the high turbo factor (turbo factor = 7). Thus, low specific absorption rate and short scan time can be achieved in Turboprop+ using a high turbo factor, whereas off-resonance related artifacts are minimized. © 2012 Wiley Periodicals, Inc.

  1. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  2. Predictability of CFSv2 in the tropical Indo-Pacific region, at daily and subseasonal time scales

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, V.

    2018-06-01

    The predictability of a coupled climate model is evaluated at daily and intraseasonal time scales in the tropical Indo-Pacific region during boreal summer and winter. This study assessed the daily retrospective forecasts of the Climate Forecast System version 2 from the National Centers for Environmental Prediction for the period 1982-2010. The growth of errors in the forecasts of daily precipitation, the monsoon intraseasonal oscillation (MISO) and the Madden-Julian oscillation (MJO) is studied. The seasonal cycle of the daily climatology of precipitation is reasonably well predicted, except for an underestimation during the peak of summer. The anomalies follow the typical pattern of error growth in nonlinear systems and show no difference between summer and winter. The initial errors in all cases are found to be in the nonlinear phase of the error growth. The doubling time of small errors is estimated by applying the Lorenz error-growth formula. For summer and winter, the doubling time of the forecast errors is in the range of 4-7 and 5-14 days, while the doubling time of the predictability errors is 6-8 and 8-14 days, respectively. The doubling time in MISO during the summer and MJO during the winter is in the range of 12-14 days, indicating higher predictability and providing optimism for long-range prediction. There is no significant difference in the growth of forecast errors originating from different phases of MISO and MJO, although prediction of the active phase seems to be slightly better.
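
    For concreteness, the doubling time of small errors follows from fitting the logistic solution of the Lorenz error-growth equation dE/dt = aE(1 - E/E∞) to the forecast-error history. The sketch below uses synthetic error data; the growth rate, saturation level, and initial error are assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lorenz_growth(t, a, E_inf, E0):
        # logistic solution of dE/dt = a*E*(1 - E/E_inf)
        return E_inf / (1.0 + (E_inf / E0 - 1.0) * np.exp(-a * t))

    days = np.arange(1, 16, dtype=float)
    E = lorenz_growth(days, 0.12, 1.0, 0.1)       # synthetic RMS error history

    (a, E_inf, E0), _ = curve_fit(lorenz_growth, days, E, p0=[0.2, 1.5, 0.05])
    print(f"doubling time of small errors: {np.log(2)/a:.1f} days")  # ~5.8 days
    ```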

  3. Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink

    DTIC Science & Technology

    2013-03-21

    number of transmitters). The channel capacity is $C = \log_2\left|I_{N_r} + \frac{E_s}{N_t N_0} H H^H\right|$ (2.32), where, in the signal-to-noise ratio, $E_s$ represents the total energy from all transmitters. The channel matrix pseudo-inverse is computed as $H^+ = (H^H H)^{-1} H^H$ (2.36) [6, p. 970]. 2.6.5 Minimum Mean-Squared Error Detection: $H^\dagger = \left(H^H H + \frac{N_t}{\mathrm{SNR}} I\right)^{-1} H^H$ (3.14). Equation (3.14) was defined in [2] as an implementation of an MMSE equalizer, and was applied to the received signal.
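
    A numpy sketch of the zero-forcing pseudo-inverse (2.36) and MMSE (3.14) detectors for a random flat-fading channel; the dimensions, SNR, and noise scaling are arbitrary illustrations:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    Nt, Nr, snr = 4, 6, 10.0                      # transmit/receive antennas, Es/N0

    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
    s = (np.sign(rng.normal(size=Nt)) + 1j * np.sign(rng.normal(size=Nt))) / np.sqrt(2)
    n = (rng.normal(size=Nr) + 1j * rng.normal(size=Nr)) * np.sqrt(0.5 / snr)
    y = H @ s + n                                 # received vector

    Hh = H.conj().T
    W_zf   = np.linalg.solve(Hh @ H, Hh)                          # H+, eq. (2.36)
    W_mmse = np.linalg.solve(Hh @ H + (Nt / snr) * np.eye(Nt), Hh)  # eq. (3.14)

    s_zf, s_mmse = W_zf @ y, W_mmse @ y           # MMSE trades bias for noise robustness
    C = np.log2(np.linalg.det(np.eye(Nr) + (snr / Nt) * H @ Hh).real)  # eq. (2.32)
    ```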

  4. Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.

    PubMed

    Hampson, R E; Deadwyler, S A

    1996-11-26

    Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.

  5. Chemotherapy Order Entry by a Clinical Support Pharmacy Technician in an Outpatient Medical Day Unit

    PubMed Central

    Neville, Heather; Broadfield, Larry; Harding, Claudia; Heukshorst, Shelley; Sweetapple, Jennifer; Rolle, Megan

    2016-01-01

    Background: Pharmacy technicians are expanding their scope of practice, often in partnership with pharmacists. In oncology, such a shift in responsibilities may lead to workflow efficiencies, but may also cause concerns about patient risk and medication errors. Objectives: The primary objective was to compare the time spent on order entry and order-entry checking before and after training of a clinical support pharmacy technician (CSPT) to perform chemotherapy order entry. The secondary objectives were to document workflow interruptions and to assess medication errors. Methods: This before-and-after observational study investigated chemotherapy order entry for ambulatory oncology patients. Order entry was performed by pharmacists before the process change (phase 1) and by 1 CSPT after the change (phase 2); order-entry checking was performed by a pharmacist during both phases. The tasks were timed by an independent observer using a personal digital assistant. A convenience sample of 125 orders was targeted for each phase. Data were exported to Microsoft Excel software, and timing differences for each task were tested with an unpaired t test. Results: Totals of 143 and 128 individual orders were timed for order entry during phase 1 (pharmacist) and phase 2 (CSPT), respectively. The mean total time to perform order entry was greater during phase 1 (1:37 min versus 1:20 min; p = 0.044). Totals of 144 and 122 individual orders were timed for order-entry checking (by a pharmacist) in phases 1 and 2, respectively, and there was no difference in mean total time for order-entry checking (1:21 min versus 1:20 min; p = 0.69). There were 33 interruptions not related to order entry (totalling 39:38 min) during phase 1 and 25 interruptions (totalling 30:08 min) during phase 2. Three errors were observed during order entry in phase 1 and one error during order-entry checking in phase 2; the errors were rated as having no effect on patient care. Conclusions: Chemotherapy order entry by a trained CSPT appeared to be just as safe and efficient as order entry by a pharmacist. Changes in pharmacy technicians’ scope of practice could increase the amount of time available for pharmacists to provide direct patient care in the oncology setting. PMID:27402999

  6. Chemotherapy Order Entry by a Clinical Support Pharmacy Technician in an Outpatient Medical Day Unit.

    PubMed

    Neville, Heather; Broadfield, Larry; Harding, Claudia; Heukshorst, Shelley; Sweetapple, Jennifer; Rolle, Megan

    2016-01-01

    Pharmacy technicians are expanding their scope of practice, often in partnership with pharmacists. In oncology, such a shift in responsibilities may lead to workflow efficiencies, but may also cause concerns about patient risk and medication errors. The primary objective was to compare the time spent on order entry and order-entry checking before and after training of a clinical support pharmacy technician (CSPT) to perform chemotherapy order entry. The secondary objectives were to document workflow interruptions and to assess medication errors. This before-and-after observational study investigated chemotherapy order entry for ambulatory oncology patients. Order entry was performed by pharmacists before the process change (phase 1) and by 1 CSPT after the change (phase 2); order-entry checking was performed by a pharmacist during both phases. The tasks were timed by an independent observer using a personal digital assistant. A convenience sample of 125 orders was targeted for each phase. Data were exported to Microsoft Excel software, and timing differences for each task were tested with an unpaired t test. Totals of 143 and 128 individual orders were timed for order entry during phase 1 (pharmacist) and phase 2 (CSPT), respectively. The mean total time to perform order entry was greater during phase 1 (1:37 min versus 1:20 min; p = 0.044). Totals of 144 and 122 individual orders were timed for order-entry checking (by a pharmacist) in phases 1 and 2, respectively, and there was no difference in mean total time for order-entry checking (1:21 min versus 1:20 min; p = 0.69). There were 33 interruptions not related to order entry (totalling 39:38 min) during phase 1 and 25 interruptions (totalling 30:08 min) during phase 2. Three errors were observed during order entry in phase 1 and one error during order-entry checking in phase 2; the errors were rated as having no effect on patient care. Chemotherapy order entry by a trained CSPT appeared to be just as safe and efficient as order entry by a pharmacist. Changes in pharmacy technicians' scope of practice could increase the amount of time available for pharmacists to provide direct patient care in the oncology setting.

  7. Parametric study of minimum converter loss in an energy-storage dc-to-dc converter

    NASA Technical Reports Server (NTRS)

    Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.

    1982-01-01

    Through a combination of analytical and numerical minimization procedures, a converter design that results in the minimum total converter loss (including core loss, winding loss, capacitor and energy-storage-reactor loss, and various losses in the semiconductor switches) is obtained. Because the initial phase involves analytical minimization, the computation time required by the subsequent phase of numerical minimization is considerably reduced in this combination approach. The effects of various loss parameters on the optimum values of the design variables are also examined.

  8. Tomography of a displacement photon counter for discrimination of single-rail optical qubits

    NASA Astrophysics Data System (ADS)

    Izumi, Shuro; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.

    2018-04-01

    We investigate the performance of a detection strategy composed of a displacement operation and a photon counter, which is known as a beneficial tool in optical coherent communications, for the quantum state discrimination of the two superpositions of vacuum and single-photon states corresponding to the $\hat{\sigma}_x$ eigenstates in the single-rail encoding of photonic qubits. We experimentally characterize the detection strategy in the vacuum-single-photon two-dimensional space using quantum detector tomography and evaluate the achievable discrimination error probability from the reconstructed measurement operators. We furthermore derive the minimum error rate obtainable with Gaussian transformations and homodyne detection. Our proof-of-principle experiment shows that the proposed scheme can achieve a discrimination error surpassing homodyne detection.

  9. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System used in orbit and baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
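
    A sketch of the approximation idea: summing a few first-order (Gauss-Markov) processes with decade-spaced correlation times yields an approximately power-law spectrum over several decades. The time constants and equal weights below are illustrative; in the paper they would be derived from the oscillator's Allan variance:

    ```python
    import numpy as np

    def gauss_markov(n, dt, tau, sigma, rng):
        # first-order Markov process: x[k+1] = a*x[k] + w[k], a = exp(-dt/tau),
        # driven so the stationary variance is sigma**2
        a = np.exp(-dt / tau)
        w = rng.normal(0.0, sigma * np.sqrt(1.0 - a * a), n)
        x = np.empty(n)
        x[0] = w[0]
        for k in range(1, n):
            x[k] = a * x[k - 1] + w[k]
        return x

    rng = np.random.default_rng(3)
    n, dt = 2**15, 1.0
    taus = np.logspace(0, 4, 5)                   # five decade-spaced time constants
    noise = sum(gauss_markov(n, dt, tau, 1.0, rng) for tau in taus)

    psd = np.abs(np.fft.rfft(noise)) ** 2 / n     # roughly power-law over ~4 decades
    ```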

  10. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability $P_b$ for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to $P_b$ is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation $P_b \approx (d_H/N)P_s$, where $P_s$ represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum $P_b$ when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  11. Frontal midline theta and the error-related negativity: neurophysiological mechanisms of action regulation.

    PubMed

    Luu, Phan; Tucker, Don M; Makeig, Scott

    2004-08-01

    The error-related negativity (ERN) is an event-related potential (ERP) peak occurring between 50 and 100 ms after the commission of a speeded motor response that the subject immediately realizes to be in error. The ERN is believed to index brain processes that monitor action outcomes. Our previous analyses of ERP and EEG data suggested that the ERN is dominated by partial phase-locking of intermittent theta-band EEG activity. In this paper, this possibility is further evaluated. The possibility that the ERN is produced by phase-locking of theta-band EEG activity was examined by analyzing the single-trial EEG traces from a forced-choice speeded response paradigm before and after applying theta-band (4-7 Hz) filtering and by comparing the averaged and single-trial phase-locked (ERP) and non-phase-locked (other) EEG data. Electrical source analyses were used to estimate the brain sources involved in the generation of the ERN. Beginning just before incorrect button presses in a speeded choice response paradigm, midfrontal theta-band activity increased in amplitude and became partially and transiently phase-locked to the subject's motor response, accounting for 57% of ERN peak amplitude. The portion of the theta-EEG activity increase remaining after subtracting the response-locked ERP from each trial was larger and longer lasting after error responses than after correct responses, extending on average 400 ms beyond the ERN peak. Multiple equivalent-dipole source analysis suggested 3 possible equivalent dipole sources of the theta-bandpassed ERN, while the scalp distribution of non-phase-locked theta amplitude suggested the presence of additional frontal theta-EEG sources. These results appear consistent with a body of research that demonstrates a relationship between limbic theta activity and action regulation, including error monitoring and learning.

  12. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    PubMed

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

    The pre-pre-analytical and pre-analytical phases account for a major share of laboratory errors. This study considered a very common procedure, the oral glucose tolerance test, to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability, and help in the decision making of laboratory personnel. The aim of this research is to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. This observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected, as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested knowledge of patient preparation. The appropriateness of the test result (QI-1) had the highest error rate. Although QI-5, for sample collection, had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and should be conducted routinely on a yearly basis to identify errors and take corrective action, and to facilitate the gradual introduction of such evaluations into routine practice.

  13. Impact of number of co-existing rotors and inter-electrode distance on accuracy of rotor localization

    PubMed Central

    Aronis, Konstantinos N.; Ashikaga, Hiroshi

    2018-01-01

    Background: Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization from multiple rotors coexistence and a relatively large (9–11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. Methods and results: We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy and dominant frequency. We then spatially downsampled the time series to create IEDs of 2–30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When there are more than one rotors, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). Conclusions: The magnitude of error of rotor localization using a clinically available basket catheter, in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. PMID:28988690

  14. Impact of number of co-existing rotors and inter-electrode distance on accuracy of rotor localization.

    PubMed

    Aronis, Konstantinos N; Ashikaga, Hiroshi

    Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization from multiple rotors coexistence and a relatively large (9-11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy and dominant frequency. We then spatially downsampled the time series to create IEDs of 2-30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When there are more than one rotors, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). The magnitude of error of rotor localization using a clinically available basket catheter, in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. Copyright © 2017 Elsevier Inc. All rights reserved.
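
    A sketch of the phase-singularity method used above for rotor localization: integrate the wrapped phase gradient around each lattice plaquette and flag cells with a nonzero winding number (the single-rotor phase map below is a synthetic idealization):

    ```python
    import numpy as np

    def phase_singularities(phase):
        """Locate phase singularities on a 2-D phase map by summing the wrapped
        phase differences around each 2x2 plaquette; a winding of +/-2*pi
        marks a rotor tip."""
        def wrap(d):
            return (d + np.pi) % (2 * np.pi) - np.pi
        d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left -> right
        d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, top -> bottom
        d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right -> left
        d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, bottom -> top
        winding = (d1 + d2 + d3 + d4) / (2 * np.pi)
        return np.argwhere(np.round(winding) != 0)

    # synthetic single rotor: phase equals the polar angle about the lattice centre
    y, x = np.mgrid[0:50, 0:50]
    phase = np.arctan2(y - 24.5, x - 24.5)
    print(phase_singularities(phase))   # the plaquette enclosing (24.5, 24.5)
    ```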

  15. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly fabricated and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and PBS, and their impact on measuring performance, are analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design index, element tolerances and an error-correction method for the system are proposed; the ranging system is built and a ranging experiment is performed. Experimental results show that with the proposed tolerances the system satisfies the accuracy requirement. The present work offers guidance for further research on system design and error distribution.
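
    A worked numerical example of the ranging principle: successive in-phase modulation frequencies are spaced by Δf = c/(2D), so the distance follows directly from the measured spacing. The frequency values below are hypothetical:

    ```python
    c = 299_792_458.0                               # speed of light, m/s

    # hypothetical in-phase modulation frequencies measured during a sweep (Hz)
    f_inphase = [149.896e6, 164.886e6, 179.875e6]

    df = (f_inphase[-1] - f_inphase[0]) / (len(f_inphase) - 1)
    D = c / (2.0 * df)                              # round-trip in-phase condition
    print(f"estimated distance: {D:.3f} m")         # ~10.000 m
    ```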

  16. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  17. An Alternative Mechanism for the Dimerization of Formic Acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brinkman, Nicole R.; Tschumper, Gregory; Yan, Ge

    Gas-phase formic acid exists primarily as a cyclic dimer. The mechanism of dimerization has traditionally been considered a synchronous process; however, recent experimental findings suggest a possible alternative mechanism by which two formic acid monomers proceed through an acyclic dimer to the cyclic dimer in a stepwise process. To investigate this newly proposed process of dimerization in formic acid, density functional theory and second-order Møller-Plesset perturbation theory (MP2) have been used to optimize cis and trans monomers of formic acid, the acyclic and cyclic dimers, and the acyclic and cyclic transition states between minima. Single-point energies of the trans monomer, dimer minima, and transition states at the MP2/TZ2P+diff optimized geometries were computed at the coupled-cluster level of theory including singles and doubles with perturbatively applied triple excitations [CCSD(T)] with an aug-cc-pVTZ basis set to obtain an accurate determination of energy barriers and dissociation energies. A counterpoise correction was performed to estimate the basis set superposition error in computing relative energies. The explicitly correlated MP2 method of Kutzelnigg and Klopper (MP2-R12) was used to provide an independent means of obtaining the MP2 one-particle limit. The cyclic minimum is predicted to be 6.3 kcal/mol more stable than the acyclic minimum, and the barrier to double proton transfer is 7.1 kcal/mol.

  18. Quasispecies dynamics on a network of interacting genotypes and idiotypes: formulation of the model

    NASA Astrophysics Data System (ADS)

    Barbosa, Valmir C.; Donangelo, Raul; Souza, Sergio R.

    2015-01-01

    A quasispecies is the stationary state of a set of interrelated genotypes that evolve according to the usual principles of selection and mutation. Quasispecies studies have for the most part concentrated on the possibility of errors during genotype replication and their role in promoting either the survival or the demise of the quasispecies. In a previous work, we introduced a network model of quasispecies dynamics, based on a single probability parameter (p) and capable of addressing several plausibility issues of previous models. Here we extend that model by pairing its network with another one aimed at modeling the dynamics of the immune system when confronted with the quasispecies. The new network is based on the idiotypic-network model of immunity and, together with the previous one, constitutes a network model of interacting genotypes and idiotypes. The resulting model requires further parameters and as a consequence leads to a vast phase space. We have focused on a particular niche in which it is possible to observe the trade-offs involved in the quasispecies' survival or destruction. Within this niche, we give simulation results that highlight some key preconditions for quasispecies survival. These include a minimum initial abundance of genotypes relative to that of the idiotypes and a minimum value of p. The latter, in particular, is to be contrasted with the stand-alone quasispecies network of our previous work, in which arbitrarily low values of p constitute a guarantee of quasispecies survival.

  19. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination.

    PubMed

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-02-20

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, thus it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD model are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are half those of the DD model, and the dispersions of the SD2 model are less than half those of the DD model. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 more advantageous in attitude determination under sheltered environments.

  20. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination

    PubMed Central

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-01-01

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, thus it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD model are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are half those of the DD model, and the dispersions of the SD2 model are less than half those of the DD model. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 more advantageous in attitude determination under sheltered environments. PMID:28230753
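
    A toy numerical sketch of the clock cancellations underlying the SD/DD comparison (simplified phase observables in metres; ambiguities, atmosphere, and noise terms other than clocks are omitted, and all values are synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    c, nsat = 299_792_458.0, 5

    rho_A = rng.uniform(2.0e7, 2.5e7, nsat)       # geometric ranges to antenna A (m)
    rho_B = rho_A + rng.normal(0.0, 0.5, nsat)    # short baseline to antenna B
    dT = rng.normal(0.0, 1e-5, nsat)              # satellite clock errors (s)
    dt_A = dt_B = 3.2e-7                          # common (synchronized) receiver clock

    phi_A = rho_A + c * (dt_A - dT)               # simplified carrier-phase observables
    phi_B = rho_B + c * (dt_B - dT)

    sd = phi_A - phi_B          # satellite clocks cancel; so does the common receiver clock
    dd = sd[1:] - sd[0]         # DD also cancels distinct receiver clocks, at the cost
                                # of needing one more satellite than SD
    print(np.allclose(sd, rho_A - rho_B))         # True
    ```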

  1. Smart-Divert Powered Descent Guidance to Avoid the Backshell Landing Dispersion Ellipse

    NASA Technical Reports Server (NTRS)

    Carson, John M.; Acikmese, Behcet

    2013-01-01

    A smart-divert capability has been added into the Powered Descent Guidance (PDG) software originally developed for Mars pinpoint and precision landing. The smart-divert algorithm accounts for the landing dispersions of the entry backshell, which separates from the lander vehicle at the end of the parachute descent phase and prior to powered descent. The smart-divert PDG algorithm utilizes the onboard fuel and vehicle thrust vectoring to mitigate landing error in an intelligent way: ensuring that the lander touches down with minimum-fuel usage at the minimum distance from the desired landing location that also avoids impact by the descending backshell. The smart-divert PDG software implements a computationally efficient, convex formulation of the powered-descent guidance problem to provide pinpoint or precision-landing guidance solutions that are fuel-optimal and satisfy physical thrust bound and pointing constraints, as well as position and speed constraints. The initial smart-divert implementation enforced a lateral-divert corridor parallel to the ground velocity vector; this was based on guidance requirements for MSL (Mars Science Laboratory) landings. This initial method was overly conservative since the divert corridor was infinite in the down-range direction despite the backshell landing inside a calculable dispersion ellipse. Basing the divert constraint instead on a local tangent to the backshell dispersion ellipse in the direction of the desired landing site provides a far less conservative constraint. The resulting enhanced smart-divert PDG algorithm avoids impact with the descending backshell and has reduced conservatism.

  2. Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1985-01-01

    Traditionally, symmetric multiple phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It is shown that by designing the signal constellations to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of rate n/(n + 1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance d_free of the trellis. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system indicates the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is to be emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. Asymmetric MPSK signal sets for coded systems appear in the work of Divsalar.
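
    A small sketch of the performance measure being compared: for unit-energy MPSK the uncoded minimum distance is d_min = 2 sin(π/M), and the asymptotic coding gain is 10·log10(d_free²/d_min²). The d_free value below is hypothetical, purely for illustration:

    ```python
    import numpy as np

    def dmin_mpsk(M: int) -> float:
        return 2.0 * np.sin(np.pi / M)   # minimum distance, unit-energy symmetric MPSK

    d_free = 2.0                          # hypothetical free distance of a trellis code
    gain_db = 10.0 * np.log10(d_free**2 / dmin_mpsk(4)**2)
    print(f"asymptotic gain over uncoded QPSK: {gain_db:.2f} dB")   # 3.01 dB
    ```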

  3. Apparatus and Method to Enable Precision and Fast Laser Frequency Tuning

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R. (Inventor); Numata, Kenji (Inventor); Wu, Stewart T. (Inventor); Yang, Guangning (Inventor)

    2015-01-01

    An apparatus and method is provided to enable precision and fast laser frequency tuning. For instance, a fast tunable slave laser may be dynamically offset-locked to a reference laser line using an optical phase-locked loop. The slave laser is heterodyned against a reference laser line to generate a beatnote that is subsequently frequency divided. The phase difference between the divided beatnote and a reference signal may be detected to generate an error signal proportional to the phase difference. The error signal is converted into appropriate feedback signals to phase lock the divided beatnote to the reference signal. The slave laser frequency target may be rapidly changed based on a combination of a dynamically changing frequency of the reference signal, the frequency dividing factor, and an effective polarity of the error signal. Feed-forward signals may be generated to accelerate the slave laser frequency switching through laser tuning ports.

  4. Error catastrophe and phase transition in the empirical fitness landscape of HIV

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-03-01

    We have translated clinical sequence databases of the p6 HIV protein into an empirical fitness landscape quantifying viral replicative capacity as a function of the amino acid sequence. We show that the viral population resides close to a phase transition in sequence space corresponding to an "error catastrophe" beyond which there is lethal accumulation of mutations. Our model predicts that the phase transition may be induced by drug therapies that elevate the mutation rate, or by forcing mutations at particular amino acids. Applying immune pressure to any combination of killer T-cell targets cannot induce the transition, providing a rationale for why the viral protein can exist close to the error catastrophe without sustaining fatal fitness penalties due to adaptive immunity.
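
    For readers unfamiliar with the error catastrophe, the sketch below reproduces it in the textbook single-peak (Eigen) quasispecies model rather than the paper's empirical p6 landscape: the master-sequence frequency in the mutation-selection balance collapses once the per-site mutation rate crosses the threshold (1 - mu)^L * sigma = 1. Genome length and fitness values are arbitrary.

      import numpy as np
      from itertools import product

      L, sigma = 8, 10.0                       # genome length, master fitness
      seqs = np.array(list(product([0, 1], repeat=L)))
      dist = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)  # Hamming distances
      fit = np.ones(len(seqs))
      fit[0] = sigma                           # single fitness peak at sequence 0
      for mu in (0.05, 0.20, 0.35):
          Q = mu ** dist * (1 - mu) ** (L - dist)   # per-sequence mutation kernel
          W = Q * fit[None, :]                      # selection then mutation
          x = np.full(len(seqs), 1.0 / len(seqs))
          for _ in range(2000):                     # power iteration -> quasispecies
              x = W @ x
              x /= x.sum()
          print(f"mu = {mu:.2f}  master-sequence frequency = {x[0]:.3f}")
      print("predicted threshold mu* =", round(1 - (1 / sigma) ** (1 / L), 3))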

  5. Robustness of speckle imaging techniques applied to horizontal imaging scenarios

    NASA Astrophysics Data System (ADS)

    Bos, Jeremy P.

    Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for a minimum level of computational complexity. Speckle imaging methods are among a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible by a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of the MSE. In addition, the Knox-Thompson phase recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.

  6. Some Insights of Spectral Optimization in Ocean Color Inversion

    NASA Technical Reports Server (NTRS)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In the past decades, various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface in order to justify the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.
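
    A minimal sketch of the approach, with a deliberately simplified forward model: remote sensing reflectance is modeled from two optically active variables, the error function is a least-squares difference from the measured spectrum, and a numerical optimizer finds the minimum. The spectral shapes, coefficients, and water values below are placeholders, not those of the paper.

      import numpy as np
      from scipy.optimize import minimize

      wl  = np.array([412., 443., 490., 530., 555.])       # bands, nm
      aw  = np.array([0.005, 0.007, 0.015, 0.050, 0.060])  # water absorption (assumed)
      bbw = np.array([0.003, 0.002, 0.002, 0.001, 0.001])  # water backscatter (assumed)
      aph_shape = np.exp(-((wl - 443.) / 80.) ** 2)        # phytoplankton shape
      bbp_shape = 443. / wl                                # particle shape

      def forward(p):
          P, X = p                                  # the two retrieved variables
          a, bb = aw + P * aph_shape, bbw + X * bbp_shape
          u = bb / (a + bb)
          return 0.089 * u + 0.125 * u ** 2         # Gordon-style quadratic rrs

      rrs_meas = forward([0.05, 0.01])              # synthetic "measurement"
      err = lambda p: np.sum((forward(p) - rrs_meas) ** 2)   # the error function
      sol = minimize(err, x0=[0.2, 0.05], bounds=[(0., 5.), (0., 1.)])
      print("retrieved P, X:", np.round(sol.x, 4))  # recovers [0.05, 0.01]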

  7. Efficiently explore the energy landscape of proteins in molecular dynamics simulations by amplifying collective motions

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Zhang, Zhiyong; Shi, Yunyu; Liu, Haiyan

    2003-08-01

    We describe a method for efficient sampling of the energy landscape of a protein in atomic molecular dynamics simulations. A simulation is divided into alternately occurring relaxation phases and excitation phases. In the relaxation phase (conventional simulation), we use a frequently updated reference structure and deviations from this reference structure to mark whether the system has been trapped in a local minimum. In that case, the simulation enters the excitation phase, during which a few slow collective modes of the system are coupled to a higher temperature bath. After the system has escaped from the minimum (also judged by deviations from the reference structure), the simulation reenters the relaxation phase. The collective modes are obtained from a coarse-grained Gaussian elastic network model. The scheme, which we call ACM-AME (amplified collective motion-assisted minimum escaping), is compared with conventional simulations as well as an alternative scheme that elevates the temperature of all degrees of freedom during the excitation phase (amplified overall motion-assisted minimum escaping, or AOM-AME). Comparison is made using simulations on four peptides starting from non-native extended or all-helical structures. In terms of sampling low energy conformations and continuously sampling new conformations throughout a simulation, the ACM-AME scheme demonstrates very good performance while the AOM-AME scheme shows little improvement upon conventional simulations. Limited success is achieved in producing structures close to the native structures of the peptides: for an S-peptide analog, the ACM-AME approach is able to reproduce its native helical structure, and starting from an all-helical structure of the villin headpiece subdomain (HP-36) in implicit solvent, two out of three 150 ns ACM-AME runs are able to sample structures with 3-4 Å backbone root-mean-square deviations from the nuclear magnetic resonance structure of the protein.
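
    The collective modes in such a scheme come from a coarse-grained elastic network. As a self-contained illustration, the sketch below builds a Gaussian-network Kirchhoff (Laplacian) matrix from placeholder chain coordinates and extracts the slowest nonzero modes by eigendecomposition; real use would take the C-alpha coordinates of the current structure instead of the fake chain.

      import numpy as np

      rng = np.random.default_rng(1)
      xyz = np.cumsum(rng.normal(scale=2.0, size=(60, 3)), axis=0)  # fake chain
      cutoff = 7.0                                 # contact cutoff, angstroms
      d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
      kirchhoff = -(d < cutoff).astype(float)      # -1 for each contact pair
      np.fill_diagonal(kirchhoff, 0.0)
      np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # node degrees
      evals, evecs = np.linalg.eigh(kirchhoff)     # ascending eigenvalues
      slow_modes = evecs[:, 1:4]                   # skip the zero mode
      print("slowest nonzero eigenvalues:", np.round(evals[1:4], 3))
      print("mode matrix shape:", slow_modes.shape)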

  8. An analysis of carrier phase jitter in an MPSK receiver utilizing MAP estimation. Ph.D. Thesis Semiannual Status Report, Jul. 1993 - Jan. 1994

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1994-01-01

    The use of 8 and 16 PSK TCM to support satellite communications, in an effort to achieve more bandwidth efficiency in a power-limited channel, has been proposed. This project addresses the problem of carrier phase jitter in an M-PSK receiver utilizing the high-SNR approximation to the maximum a posteriori estimation of carrier phase. In particular, numerical solutions for the 8 and 16 PSK self-noise and phase detector gain in the carrier tracking loop are presented. The effect of changing SNR on the loop noise bandwidth is also discussed. These data are then used to compute the variance of phase error as a function of SNR. Simulation and hardware data are used to verify these calculations. The results show that there is a threshold in the variance-of-phase-error versus SNR curves that is a strong function of SNR and a weak function of loop bandwidth. The M-PSK variance thresholds occur at SNRs in the range of practical interest for the use of 8 and 16-PSK TCM. This suggests that phase error variance is an important consideration in the design of these systems.

  9. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, and the combination is decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
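
    The information-rate cost of a (d, k) constraint follows from its constraint graph. As a quick illustration (the standard textbook construction, not the paper's code designs), the sketch below computes the Shannon capacity in bits per slot as log2 of the largest eigenvalue of the graph's adjacency matrix; finite k is assumed.

      import numpy as np

      def dk_capacity(d, k):
          # States track the number of zeros emitted since the last one (0..k).
          n = k + 1
          A = np.zeros((n, n))
          for s in range(n):
              if s < k:
                  A[s, s + 1] = 1     # emit a 0: the zero-run grows
              if s >= d:
                  A[s, 0] = 1         # emit a 1: allowed once the run >= d
          return np.log2(max(abs(np.linalg.eigvals(A))))

      print(round(dk_capacity(1, 3), 4))    # classic (1,3) RLL: ~0.5515 bits/slot
      print(round(dk_capacity(2, 10), 4))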

  10. Variation principle in calculating the flow of a two-phase mixture in the pipes of the cooling systems in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Aksenov, Andrey; Malysheva, Anna

    2018-03-01

    An analytical solution is given for one of the urgent problems of modern hydromechanics and heat engineering: the distribution of the gas and liquid phases over the channel cross-section, the thickness of the annular layer, and their connection with the mass content of the gas phase in a gas-liquid flow. The analytical method is based on the fundamental laws of theoretical mechanics and thermophysics concerning the minimum of energy dissipation and the minimum rate of increase of the system entropy, which determine the stability of stationary states and processes. The obtained dependencies reveal the physical laws of the motion of two-phase media and can be used in hydraulic calculations during the design and operation of refrigeration and air conditioning systems.

  11. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first and second moments of the Bayesian MMSE error estimator and its cross moments with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
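
    As a numerical point of reference for what such RMS analysis quantifies, the sketch below Monte Carlo-estimates the RMS of a simple error estimator (plain resubstitution, standing in for the Bayesian MMSE estimator the paper analyzes) against the exact true error of each designed classifier, in a two-class Gaussian model with identity covariance assumed known. All sizes and means are arbitrary.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      p, n, mu = 2, 15, np.array([1.0, 0.0])   # dims, per-class size, half-gap
      sq, true_errs = [], []
      for _ in range(2000):
          x0 = rng.normal(size=(n, p)) - mu    # class 0 ~ N(-mu, I)
          x1 = rng.normal(size=(n, p)) + mu    # class 1 ~ N(+mu, I)
          m0, m1 = x0.mean(0), x1.mean(0)
          w = m1 - m0                          # discriminant (Sigma = I known)
          c = w @ (m0 + m1) / 2.0              # midpoint threshold
          resub = 0.5 * ((x0 @ w > c).mean() + (x1 @ w <= c).mean())
          s = np.linalg.norm(w)
          true = 0.5 * ((1 - norm.cdf((c + w @ mu) / s))
                        + norm.cdf((c - w @ mu) / s))   # exact true error
          sq.append((resub - true) ** 2)
          true_errs.append(true)
      print("mean true error:", round(float(np.mean(true_errs)), 4))
      print("RMS of the estimator:", round(float(np.sqrt(np.mean(sq))), 4))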

  12. Differential phase measurements of D-region partial reflections

    NASA Technical Reports Server (NTRS)

    Wiersma, D. J.; Sechrist, C. F., Jr.

    1972-01-01

    Differential phase partial reflection measurements were used to deduce D region electron density profiles. The phase difference was measured by taking sums and differences of amplitudes received on an array of crossed dipoles. The reflection model used was derived from Fresnel reflection theory. Seven profiles obtained over the period from 13 October 1971 to 5 November 1971 are presented, along with the results from simultaneous measurements of differential absorption. Some possible sources of error and error propagation are discussed. A collision frequency profile was deduced from the electron concentration calculated from differential phase and differential absorption.

  13. Minimum visual requirements in different occupations in Finland.

    PubMed

    Aine, E

    1984-01-01

    In Finland, employers can individually fix the minimum visual requirements for their personnel in almost every occupation. In transportation, the police, and national defence, proper eyesight is regarded as so important that strict visual requirements have been fixed by the Government. The requirements are often stricter when a person is accepted into an occupation than later on when working. The minimum requirements are mostly stated for visual acuity, colour perception, and visual fields. In some occupations the regulations also concern the refractive error of the eyes and possible eye diseases. In aviation the regulations have been stated by the International Civil Aviation Organization (ICAO). The minimum visual requirements for a driving license in highway traffic are classed according to the type of motor vehicle. In railways, maritime commerce, and national defence, the task of the worker determines the specific regulations. A policeman must have a distant visual acuity of 0.5 without eyeglasses in both eyes and nearly normal colour perception when starting the training course.

  14. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
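
    For readers new to protographs, the sketch below shows the basic lifting step: each edge of a small base (proto) matrix is replaced by a Z x Z circulant permutation, producing a structured LDPC parity-check matrix. The base matrix and circulant shifts here are illustrative and are not one of the patented rate-compatible families.

      import numpy as np

      def lift(base, shifts, Z):
          # Replace each nonzero base-matrix entry with a Z x Z circulant
          # permutation (identity rolled by the given shift).
          H = np.zeros((base.shape[0] * Z, base.shape[1] * Z), dtype=int)
          I = np.eye(Z, dtype=int)
          for r, c in zip(*np.nonzero(base)):
              H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, shifts[r, c], axis=1)
          return H

      base   = np.array([[1, 1, 1, 0],       # 2 check nodes, 4 variable nodes
                         [0, 1, 1, 1]])
      shifts = np.array([[1, 3, 5, 0],
                         [0, 2, 6, 4]])
      Z = 8
      H = lift(base, shifts, Z)
      print("H shape:", H.shape)                       # (16, 32)
      print("check-node degrees:", H.sum(1)[::Z])      # [3 3], as in the base
      print("variable-node degrees:", H.sum(0)[::Z])   # [1 2 2 1]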

  15. Evaluation of Infrared Target Discrimination Algorithms.

    DTIC Science & Technology

    1983-04-01

    application of this work is embodied in a computer program called PALANTIR, which Ref. 2 also describes in some detail. From a given set of narrow-band spectral...channels PALANTIR chooses a prescribed number of channels, picking those that will provide the least error when used in connection with a minimum

  16. Type I Rehearsal and Recognition.

    ERIC Educational Resources Information Center

    Glenberg, Arthur; Adams, Frederick

    1978-01-01

    Rote, repetitive Type I Rehearsal is defined as the continuous maintenance of information in memory using the minimum cognitive capacity necessary for maintenance. An analysis of errors made on a forced-choice recognition test supported the hypothesis that acoustic-phonemic components of the memory trace are added or strengthened by this…

  17. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of a smartphone or with a photoplethysmograph. In addition, the signal preprocessing stage has been optimized. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for the smartphone and the photoplethysmograph, and examine whether the smartphone error can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error with smartphones is slightly higher than with a photoplethysmograph.
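
    The two best-performing fiducial points are easy to visualize on a synthetic beat. The sketch below locates the maximum of the first derivative and the minimum of the second derivative on a crude Gaussian-shaped pulse; real use would first apply the reported band-pass stage (high-pass 0.1-0.8 Hz, low-pass around 2.7-3.5 Hz) to the camera signal. The pulse shape and sampling rate are invented.

      import numpy as np

      fs = 100.0                                   # sampling rate, Hz
      t = np.arange(0, 1.0, 1 / fs)                # one ~1 s beat
      pulse = np.exp(-((t - 0.3) / 0.12) ** 2)     # crude single-beat PPG shape
      d1 = np.gradient(pulse, 1 / fs)              # first derivative
      d2 = np.gradient(d1, 1 / fs)                 # second derivative
      print("max of 1st derivative at t =", t[np.argmax(d1)], "s")  # upstroke
      print("min of 2nd derivative at t =", t[np.argmin(d2)], "s")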

  18. Loran digital phase-locked loop and RF front-end system error analysis

    NASA Technical Reports Server (NTRS)

    Mccall, D. L.

    1979-01-01

    An analysis is presented of the system performance of the digital phase-locked loops (DPLLs) and RF front end implemented in the MINI-L4 Loran receiver. Three of the four experiments deal with the performance of the digital phase-locked loops. The other experiment deals with the RF front end and the DPLL system errors which arise in the front end due to poor signal-to-noise ratios. The ability of the DPLLs to track the offsets is studied.

  19. [Effect of the development phase and growth rate of a Shigella sonnei population on the reproduction of homologous bacteriophage].

    PubMed

    Voroshilova, N N; Kazakova, T B

    1983-04-01

    This study showed that the minimum latent period (20 minutes) of the intracellular multiplication of dysentery bacteriophage S-9 in the population of the S. sonnei substrate strain, under static heterogeneous surface batch cultivation, was observed at the end of the lag phase and at the growth acceleration phase, in the first and second thirds of the exponential curve, while the maximum latent period (35-40 minutes) was observed at the stationary phase. The maximum yield of phage S-9 from one infected bacterial cell (628.3 +/- 116.8) was registered during the first third of the exponential growth phase of the bacterial population, and the minimum yield (18.66 +/- 6.6) at the beginning of the lag phase. A significant direct correlation between the specific growth rate of the bacterial population and the phage yield per infected cell was established at the end of the lag phase and at the growth acceleration and deceleration phases, as well as a significant inverse correlation between the phage yield and the generation time of the bacterial population at the growth acceleration phase.

  20. Prediction of transmission distortion for wireless video communication: analysis.

    PubMed

    Chen, Zhifeng; Wu, Dapeng

    2012-03-01

    Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
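
    The first property (clipping causes propagated error to decay) can be seen with a toy experiment: clipping to the valid pixel range is a contraction, so an initial reconstruction error shrinks as frames saturate. The sketch below uses invented drift statistics purely to illustrate this mechanism, not the paper's distortion formula.

      import numpy as np

      rng = np.random.default_rng(3)
      ref = rng.uniform(0, 255, size=10000)        # error-free pixel values
      rec = ref + rng.normal(0, 40, size=10000)    # reconstruction after loss
      for frame in range(8):
          drift = rng.normal(0, 8, size=10000)     # shared frame-to-frame update
          ref = np.clip(ref + drift, 0, 255)       # clipping: 1-Lipschitz map
          rec = np.clip(rec + drift, 0, 255)
          print(f"frame {frame}: propagated MSE = {np.mean((rec - ref) ** 2):7.1f}")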
