Monteiro, Heloísa Mirelle Costa; de Mendonça, Débora Carneiro; Sousa, Mariana Séfora Bezerra; Amancio-Dos-Santos, Angela
2018-06-01
This investigation studied whether physical exercise could modulate cortical spreading depression (CSD) propagation velocity in adult rat offspring of dams that had received a high-fat (HF) diet during lactation. Male Wistar rats were suckled by dams fed either a control (C) or HF diet ad libitum. After weaning, pups received standard laboratory chow. From 40 to 60 days of life, half of the animals exercised on a treadmill (group E); the other half remained sedentary (group S). Two additional HF groups, one exercised and one sedentary, received fluoxetine (F; 10 mg/kg/day, orogastrically) from 40 to 60 days of life (groups HF/EF and HF/F). At 40 days of life, rats from the maternal HF diet group presented higher weight, thoracic circumference, and Lee Index than C animals and remained heavier at 60 days of life. Physical exercise decreased abdominal circumference. The HF diet increased CSD propagation velocity (mean ± SD; mm/min) in sedentary animals (HF/S 3.47 ± 0.31 versus C/S 3.24 ± 0.26). Treadmill exercise decelerated CSD propagation in both the C/E (2.94 ± 0.28) and HF/E (2.97 ± 0.40) groups. Fluoxetine alone decreased CSD propagation (HF/F 2.88 ± 0.45) compared with the HF/S group. The combination of fluoxetine and exercise under the HF condition (2.98 ± 0.27) was similar to the HF/E group. Physical exercise is able to reduce CSD propagation velocity in adult rat brains even after overnourishment during lactation. The effects of exercise alone or fluoxetine alone on CSD were similar to those of fluoxetine plus exercise under the HF condition. These data reinforce that malnutrition during lactation modifies cortical electrophysiology even when the HF condition no longer exists.
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
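The worksheet recipe described above pairs symbolic calculus with Monte Carlo sampling. A generic sketch of the Monte Carlo half in Python follows; this is an illustrative stand-in for the Mathcad/Excel routine, and the three-variable formula and uncertainties are assumed values, not the worksheet's:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_mc(f, means, sigmas, n=100_000):
    # Draw each input from an independent normal distribution and
    # report the mean and standard deviation of the output of f.
    samples = [rng.normal(m, s, n) for m, s in zip(means, sigmas)]
    out = f(*samples)
    return out.mean(), out.std(ddof=1)

# Illustrative three-variable formula: density of a cylinder, m / (pi r^2 h).
density = lambda m, r, h: m / (np.pi * r**2 * h)
mean, err = propagate_mc(density, means=[12.0, 1.5, 4.0], sigmas=[0.1, 0.02, 0.05])
```

Comparing the Monte Carlo standard deviation against the first-order (symbolic Taylor) result is the natural cross-check, which is what the worksheet's graphical output of per-variable contributions is meant to make visible.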
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of intensity.
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
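The STEP pipeline sketched above (observed error samples → covariance matrix → propagation in time) reduces to a few linear-algebra steps. A minimal sketch follows, with made-up burnout-error statistics and a simple constant-velocity transition matrix standing in for Scout's actual dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical burnout-error samples (altitude [m], velocity [m/s]),
# standing in for the ~50 observed sets of flight errors.
samples = rng.normal(0.0, [120.0, 3.0], size=(50, 2))

# Covariance matrix of the trajectory-parameter errors at burnout.
P0 = np.cov(samples, rowvar=False)

# Propagate the covariance forward in time with a linear state
# transition matrix F (here a constant-velocity model): P(t) = F P0 F^T.
dt = 10.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])
P_t = F @ P0 @ F.T
```

Under this toy model the velocity variance is unchanged while the altitude uncertainty absorbs the velocity error over time; a routine like the Statistical Orbital Analysis Routine would then map such a covariance into orbit-parameter accuracy.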
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges of Wireless Sensor Networks (WSNs) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSNs. However, network coding usually propagates a single original error over the whole network. Due to this special error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding in WSNs, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSNs and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSNs, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and exploitation of social characteristics, coordinate with each other and can correct propagated errors even when the corrupted fraction reaches exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
Error amplification to promote motor learning and motivation in therapy robotics.
Shirzad, Navid; Van der Loos, H F Machiel
2012-01-01
To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty-degree rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reports to a questionnaire were used to study the speed and amount of adaptation promoted by each error amplification method, as well as subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen, where alternating between error amplification methods is used to both increase a subject's motor learning and maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods will lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
Scellier, Benjamin; Bengio, Yoshua
2017-01-01
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. PMID:28522969
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable energy project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding propagated error. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
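A first-order version of the propagation described above (a wind-speed error pushed through a power curve) can be sketched as follows. The cubic power curve and the turbine numbers are toy assumptions, not one of the paper's 28 Lagrange-fitted curves:

```python
import numpy as np

def power_error(v, sigma_v, curve, dv=1e-3):
    # First-order propagation of a wind-speed error sigma_v through a
    # turbine power curve P(v): sigma_P ~ |dP/dv| * sigma_v,
    # with the derivative taken by central difference.
    dPdv = (curve(v + dv) - curve(v - dv)) / (2 * dv)
    return abs(dPdv) * sigma_v

# Toy power curve (kW): cubic between cut-in (3 m/s) and rated speed
# (12 m/s), flat at rated power above.
rated, v_in, v_rated = 2000.0, 3.0, 12.0
def curve(v):
    v = np.clip(v, v_in, v_rated)
    return rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

v = 8.0                # measured wind speed (m/s)
sigma_v = 0.10 * v     # 10% measurement error
sigma_P = power_error(v, sigma_v, curve)
rel_err = sigma_P / curve(v)
```

In the cubic region a 10% speed error amplifies to roughly a 30% power error, while near rated power the derivative, and hence the propagated error, falls toward zero; a net figure such as the paper's 5% therefore depends on how measured speeds are distributed across the curve.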
An automated workflow for patient-specific quality control of contour propagation
NASA Astrophysics Data System (ADS)
Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel
2016-12-01
Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, on which parotids were outlined by two observers. Two types of error were simulated—mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operator characteristic analysis. The area under the curve was 0.90 and 0.85 for the observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
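The built-in procedure the abstract refers to is directly accessible in common fitting routines. A brief sketch with synthetic data follows, using NumPy's `polyfit` as a stand-in and assumed noise levels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic straight-line data with known noise level.
x = np.linspace(0, 10, 50)
sigma = 0.5
y = 2.0 * x + 1.0 + rng.normal(0, sigma, x.size)

# Least-squares fit; cov=True returns the parameter covariance matrix.
coef, cov = np.polyfit(x, y, deg=1, cov=True)

# Standard errors of slope and intercept: square roots of the
# diagonal elements of the covariance matrix.
se = np.sqrt(np.diag(cov))

# Propagated error of a derived target quantity y0 = a*x0 + b,
# via g^T C g with gradient g = (x0, 1).
x0 = 5.0
g = np.array([x0, 1.0])
se_y0 = np.sqrt(g @ cov @ g)
```

The last step shows the "target quantity" idea: define the derived quantity as a function of the fit parameters and propagate the full covariance, including the off-diagonal correlation term, through its gradient.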
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye-length measurement with ultrasound and the IOL Master. Both lens formulae were partially differentiated and Gaussian error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, SD 0.59 D; 92% within ±1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, SD 0.52 D; 97% within ±1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, SD 0.48 D; 95% within ±1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as ΔP = √[(ΔL × (-4.206))² + (ΔVK × 0.9496)² + (ΔDC × (-1.4950))²] (ΔL: error in measuring axial length; ΔVK: error in measuring anterior chamber depth; ΔDC: error in measuring corneal power), and the propagated error of the SRK-II formula as ΔP = √[(ΔL × (-2.5))² + (ΔDC × (-0.9))²]. The propagated error of the Haigis formula is always larger than that of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing sophisticated formulae like the Haigis formula. However, increasing the number of parameters that need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
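The two propagated-error expressions quoted in the abstract translate directly into code. The measurement errors below are illustrative assumptions, not values from the study:

```python
import math

def propagated_error_haigis(dL, dVK, dDC):
    # Propagated IOL-power error for the Haigis formula, using the
    # partial-derivative coefficients quoted in the abstract.
    return math.sqrt((dL * -4.206)**2 + (dVK * 0.9496)**2 + (dDC * -1.4950)**2)

def propagated_error_srk2(dL, dDC):
    # Propagated IOL-power error for the SRK-II formula.
    return math.sqrt((dL * -2.5)**2 + (dDC * -0.9)**2)

# Assumed measurement errors: 0.1 mm axial length, 0.1 mm anterior
# chamber depth, 0.1 D corneal power (hypothetical, for illustration).
haigis = propagated_error_haigis(0.1, 0.1, 0.1)
srk2 = propagated_error_srk2(0.1, 0.1)
```

With equal measurement errors the Haigis result exceeds the SRK-II one, consistent with the abstract's claim: its axial-length coefficient is larger in magnitude and it carries an extra anterior-chamber-depth term.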
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was presented in terms of both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55%, and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS during parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters, and could provide significant guidance for the selection of modeling parameters in other multivariate calibration models.
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre-and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
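REPTool itself is an ArcGIS geoprocessing tool; the following is only a generic sketch of the Latin Hypercube Sampling idea it builds on, with a hypothetical two-input, two-coefficient geospatial model and invented error magnitudes:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

def lhs_normal(mean, sigma, n):
    # Latin Hypercube Sampling from N(mean, sigma): stratify the unit
    # interval into n equal-probability bins, draw one uniform point
    # per bin, shuffle the bins, and map through the normal inverse CDF.
    u = (np.arange(n) + rng.random(n)) / n
    rng.shuffle(u)
    nd = NormalDist(mean, sigma)
    return np.array([nd.inv_cdf(p) for p in u])

# Hypothetical cell-level model: recharge = a*precip + b*slope, with
# spatially invariant errors on inputs and coefficients (assumed values).
n = 1000
precip = lhs_normal(800.0, 50.0, n)   # uncertain input raster value
slope = lhs_normal(5.0, 0.5, n)       # uncertain input raster value
a = lhs_normal(0.10, 0.01, n)         # uncertain model coefficient
b = lhs_normal(-2.0, 0.2, n)          # uncertain model coefficient

recharge = a * precip + b * slope
mean, sd = recharge.mean(), recharge.std(ddof=1)
```

Each variable gets exactly one draw per equal-probability stratum, so even modest sample counts cover the distribution tails; the spread of `recharge` plays the role of the per-cell output uncertainty distribution that REPTool reports.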
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, are presented for the plane-wave and spherical-wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
The first Australian gravimetric quasigeoid model with location-specific uncertainty estimates
NASA Astrophysics Data System (ADS)
Featherstone, W. E.; McCubbine, J. C.; Brown, N. J.; Claessens, S. J.; Filmer, M. S.; Kirby, J. F.
2018-02-01
We describe the computation of the first Australian quasigeoid model to include error estimates as a function of location that have been propagated from uncertainties in the EGM2008 global model, land and altimeter-derived gravity anomalies, and terrain corrections. The model has been extended to include Australia's offshore territories and maritime boundaries using newer datasets comprising an additional ~280,000 land gravity observations, a newer altimeter-derived marine gravity anomaly grid, and terrain corrections at 1″ × 1″ resolution. The error propagation uses a remove-restore approach, where the EGM2008 quasigeoid and gravity anomaly error grids are augmented by errors propagated through a modified Stokes integral from the errors in the altimeter gravity anomalies, land gravity observations and terrain corrections. The gravimetric quasigeoid errors (one sigma) are 50-60 mm across most of the Australian landmass, increasing to ~100 mm in regions of steep horizontal gravity gradients or in the mountains, and are commensurate with external estimates.
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB, and on average 0.762 dB, can be achieved compared to the reference HEVC.
Bittel, Daniel C; Bittel, Adam J; Williams, Christine; Elazzazi, Ashraf
2017-05-01
Proper exercise form is critical for the safety and efficacy of therapeutic exercise. This research examines if a novel smartphone application, designed to monitor and provide real-time corrections during resistance training, can reduce performance errors and elicit a motor learning response. Forty-two participants aged 18 to 65 years were randomly assigned to treatment and control groups. Both groups were tested for the number of movement errors made during a 10-repetition set completed at baseline, immediately after, and 1 to 2 weeks after a single training session of knee extensions. The treatment group trained with real-time, smartphone-generated feedback, whereas the control subjects did not. Group performance (number of errors) was compared across test sets using a 2-factor mixed-model analysis of variance. No differences were observed between groups for age, sex, or resistance training experience. There was a significant interaction between test set and group. The treatment group demonstrated fewer errors on posttests 1 and 2 compared with pretest (P < 0.05). There was no reduction in the number of errors on any posttest for control subjects. Smartphone apps, such as the one used in this study, may enhance patient supervision, safety, and exercise efficacy across rehabilitation settings. A single training session with the app promoted motor learning and improved exercise performance.
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems were proven in the Apollo missions, the current exploration campaign will involve more extensive and extended missions, requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis serves as the baseline error propagation assessment to which Earth-based and Lunar-based radiometric data are added, so that the different architecture schemes can be compared, and the benefits of an integrated approach quantified, for lunar surface mobility applications near the Lunar South Pole or on the Lunar Farside.
An introduction to the component fusion extended Kalman filtering method
NASA Astrophysics Data System (ADS)
Geng, Yue; Lei, Xusheng
2018-05-01
In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed. Each component of the propagated error is assumed to be independent and Gaussian. The CFEKF is obtained through maximum-likelihood estimation of the propagation error, which adjusts the state transition matrix and the measurement matrix adaptively. By minimizing the linearization error, CFEKF effectively improves the estimation accuracy of the nonlinear system state. Its computational cost is similar to that of the EKF, making it easy to apply.
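The abstract gives no equations for CFEKF itself; as context for what it extends, here is a minimal sketch of one predict/update cycle of a standard scalar EKF. The function name and the toy linear usage below are illustrative assumptions, not taken from the paper:

```python
def ekf_step(x, P, z, f, Fj, h, Hj, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter.

    f, h  : nonlinear state-transition / measurement functions
    Fj, Hj: their derivatives (Jacobians) evaluated at the estimate
    Q, R  : process / measurement noise variances
    """
    # Predict: propagate the state and its error variance
    x_pred = f(x)
    F = Fj(x)
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain
    H = Hj(x_pred)
    S = H * P_pred * H + R     # innovation variance
    K = P_pred * H / S         # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

For a linear model (f and h the identity) a single step pulls the estimate toward the measurement and shrinks the error variance, which is the behavior CFEKF aims to improve for strongly nonlinear systems.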
Topic-Aware Physical Activity Propagation in a Health Social Network
Phan, Nhathai; Ebrahimi, Javid; Kil, Dave; Piniewski, Brigitte; Dou, Dejing
2016-01-01
Modeling physical activity propagation, such as physical exercise level and intensity, is key to preventing behavior that can lead to obesity; it can also help spread wellness behavior in a social network.
An experimental study of fault propagation in a jet-engine controller. M.S. Thesis
NASA Technical Reports Server (NTRS)
Choi, Gwan Seung
1990-01-01
An experimental analysis of the impact of transient faults on a microprocessor-based jet engine controller, used in the Boeing 747 and 757 aircraft, is described. A hierarchical simulation environment which allows the injection of transients during run-time and the tracing of their impact is described. Verification of the accuracy of this approach is also provided. A determination of the probability that a transient results in latch, pin, or functional errors is made. Given a transient fault, there is approximately an 80 percent chance that there is no impact on the chip. An empirical model depicting the process of error generation and propagation in the target system is derived. The model shows that, if no latch errors occur within eight clock cycles, no significant damage is likely to happen. Thus, the overall impact of a transient is well contained. A state transition model is also derived from the measured data to describe the error propagation characteristics within the chip and to quantify the impact of transients on the external environment. The model is used to identify and isolate the critical fault propagation paths, the module most sensitive to fault propagation, and the module with the highest potential for causing external pin errors.
Numerical ‘health check’ for scientific codes: the CADNA approach
NASA Astrophysics Data System (ADS)
Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.
2007-04-01
Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
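CADNA itself is a Fortran/C/C++ library, but the round-off propagation it screens for is easy to exhibit in any language; a small Python sketch (not from the paper) contrasting naive summation with Kahan's compensated summation:

```python
def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    """Compensated summation: re-injects the low-order bits each add loses."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c            # apply the running compensation
        t = s + y            # low-order bits of y may be lost here...
        c = (t - s) - y      # ...so recover them for the next iteration
        s = t
    return s

# Many tiny addends are silently absorbed by a large running total:
xs = [1.0] + [1e-16] * 10000
print(naive_sum(xs))   # stays at 1.0: every 1e-16 is rounded away
print(kahan_sum(xs))   # retains the accumulated 1e-12 contribution
```

A numerical "health check" in the CADNA spirit is exactly this kind of comparison: run the same computation under perturbed arithmetic and see how many digits survive.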
Uncertainty Propagation in OMFIT
NASA Astrophysics Data System (ADS)
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method for DFBP parameter estimation based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types over 80 km propagation of a 16-QAM signal at 22 Gbaud.
A wide-angle high Mach number modal expansion for infrasound propagation.
Assink, Jelle; Waxler, Roger; Velea, Doru
2017-03-01
The use of modal expansions to solve the problem of atmospheric infrasound propagation is revisited. A different form of the associated modal equation is introduced, valid for wide-angle propagation in atmospheres with high Mach number flow. The modal equation can be formulated as a quadratic eigenvalue problem for which there are simple and efficient numerical implementations. A perturbation expansion for the treatment of attenuation, valid for stratified media with background flow, is derived as well. Comparisons are carried out between the proposed algorithm and a modal algorithm assuming an effective sound speed, including a real data case study. The comparisons show that the effective sound speed approximation overestimates the effect of horizontal wind on sound propagation, leading to errors in traveltime, propagation path, trace velocity, and absorption. The error is found to be dependent on propagation angle and Mach number.
Error propagation in eigenimage filtering.
Soltanian-Zadeh, H; Windham, J P; Jenkins, J M
1990-01-01
Mathematical derivation of error (noise) propagation in eigenimage filtering is presented. Based on the mathematical expressions, a method for decreasing the propagated noise given a sequence of images is suggested. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the final composite image are compared to the SNRs and CNRs of the images in the sequence. The consistency of the assumptions and accuracy of the mathematical expressions are investigated using sequences of simulated and real magnetic resonance (MR) images of an agarose phantom and a human brain.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
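INTLAB is a MATLAB toolbox; the underlying idea can be sketched in a few lines of Python. This is a toy interval type (my construction, not the paper's) that omits the directed rounding a rigorous implementation requires:

```python
class Interval:
    """Closed interval [lo, hi]; operations return enclosing intervals.

    NOTE: a rigorous implementation would round lo down and hi up
    (directed rounding); this sketch uses ordinary float arithmetic.
    """
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A measurement of 2.0 +/- 0.1 times one of 3.0 +/- 0.2:
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
print(x * y)   # worst-case bounds on the product
```

Evaluating a whole formula with such a type yields a guaranteed enclosure of the result, which is the "comparable error estimate with much less effort" the abstract refers to.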
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s² + P_i² + P_Φ² - 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_Φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method.
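In the notation above (percentage errors and the correlation r), the formula is straightforward to evaluate; a sketch with arbitrary example values, taking C = 1 since the abstract does not specify the constant:

```python
from math import sqrt

def age_percent_error(P_s, P_i, P_phi, r, C=1.0):
    """First-order percentage error of a fission-track age.

    P_s, P_i, P_phi: percentage errors of spontaneous track density,
    induced track density, and neutron dose; r is the correlation
    between spontaneous and induced track densities.  C is taken
    as 1 here (illustrative assumption).
    """
    return C * sqrt(P_s**2 + P_i**2 + P_phi**2 - 2.0 * r * P_s * P_i)

# Positive correlation between the track densities reduces the age error:
print(age_percent_error(5.0, 5.0, 3.0, r=0.0))
print(age_percent_error(5.0, 5.0, 3.0, r=0.8))
```

The second call illustrates the abstract's point: a strong positive correlation r subtracts from the variance sum and improves the standard error of age.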
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
InSAR is an important technique for large-area DEM extraction. Several factors significantly influence the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among the different factors, which directly characterises the relationship between slant range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, simulation validation of the slant-range-induced InSAR error model was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder
ERIC Educational Resources Information Center
Hall, Steven T.; Post, Christopher J.
2009-01-01
Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels.
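The paper's DOE models are not given in the abstract; the general Monte Carlo recipe it relies on can be sketched with a toy linear model (the helper name, model, and all numbers below are illustrative assumptions):

```python
import random
import statistics

def monte_carlo_propagate(model, means, sds, n=100_000, seed=0):
    """Propagate independent Gaussian input uncertainties through `model`
    by sampling; returns the mean and standard deviation of the output."""
    rng = random.Random(seed)
    outputs = [
        model(*(rng.gauss(m, s) for m, s in zip(means, sds)))
        for _ in range(n)
    ]
    return statistics.fmean(outputs), statistics.stdev(outputs)

# Toy response surface y = 2*x1 + 0.5*x2 (coefficients are made up):
mean, sd = monte_carlo_propagate(lambda x1, x2: 2.0 * x1 + 0.5 * x2,
                                 means=(10.0, 4.0), sds=(0.3, 0.8))
print(mean, sd)
```

For this linear case the sampled standard deviation should approach the analytic value sqrt((2·0.3)² + (0.5·0.8)²) ≈ 0.721, which is how such a simulation can be validated before being applied to a model whose analytic propagation is intractable.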
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2009-01-01
In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.
Running coupling constant from lattice studies of gluon and ghost propagators
NASA Astrophysics Data System (ADS)
Cucchieri, A.; Mendes, T.
2004-12-01
We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical error and hyper-cubic effects are very small. Our present result for Λ_MS is 200 (+60/-40) MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D
2010-12-01
To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform were confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having appearances of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the in-flight instrument configuration can differ from the laboratory one given the harsh space environment and the stresses of launch. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of ISRF knowledge error and spectral calibration on Level-1 products, and its propagation to Level-2 retrieved chlorophyll fluorescence, has been analyzed. A spectral recalibration scheme implemented at Level-2 reduces the errors in Level-1 products, keeping the error in retrieved fluorescence within the oxygen absorption bands below 10% and enhancing the quality of the retrieved products. The work presented here shows how minimizing spectral calibration errors requires effort both in laboratory characterization and in the implementation of specific algorithms at Level-2.
Spaceborne Differential GPS Applications
2000-02-17
passive vehicle to the relative filter. The Clohessy-Wiltshire (CW) equations are used for state and error propagation. This filter has been designed using… such as the satellite clock error. Furthermore, directly estimating a relative state allows the use of the Clohessy-Wiltshire (CW) equations for state and error propagation. In fact, in its current form the relative filter requires no…
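The fragment above mentions the CW equations for state propagation; for reference, a sketch of the standard in-plane closed-form solution (x radial, y along-track, n the reference orbit's mean motion). This is the textbook form, an assumption on my part rather than something taken from the report:

```python
from math import sin, cos

def cw_propagate(x0, y0, vx0, vy0, n, t):
    """In-plane Clohessy-Wiltshire closed-form state propagation.

    x radial, y along-track, n = mean motion of the reference orbit.
    """
    s, c = sin(n * t), cos(n * t)
    x  = (4 - 3*c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y  = 6 * (s - n*t) * x0 + y0 - (2 / n) * (1 - c) * vx0 \
         + ((4*s - 3*n*t) / n) * vy0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = -6 * n * (1 - c) * x0 - 2 * s * vx0 + (4*c - 3) * vy0
    return x, y, vx, vy
```

Because the solution is closed form, the same transition matrix that propagates the state also propagates its error covariance, which is why the filter described above uses the CW equations for both.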
Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Weaver, Aaron S.
2003-01-01
Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *), as measured using constant stress rate ('dynamic fatigue') testing, were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using the closed form approximate equations derived from propagation of errors.
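The paper's closed-form variance functions are specific to the slow-crack-growth parameters, but the generic first-order propagation-of-errors recipe they derive from can be sketched as follows (central-difference partials; independent inputs assumed; function and names are illustrative):

```python
def propagated_variance(f, x, var, h=1e-6):
    """First-order propagation of errors: Var[f] ≈ Σ (∂f/∂x_i)² Var[x_i].

    Partials are estimated by central differences; the inputs x_i are
    assumed independent (no covariance terms).
    """
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((f(*xp) - f(*xm)) / (2 * h))
    return sum(g * g * v for g, v in zip(grads, var))

# For f = a*b this reproduces the textbook result b²·Var[a] + a²·Var[b]:
print(propagated_variance(lambda a, b: a * b, [3.0, 4.0], [0.01, 0.04]))
```

The paper's logarithmic transformation corresponds to propagating through ln f instead of f, for which the first-order result is Var[ln f] ≈ Var[f]/f²; this is what makes skewed parameters such as B and A* behave more linearly and normally.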
Nassar, Cíntia Cristina Souza; Bondan, Eduardo Fernandes; Alouche, Sandra Regina
2009-09-01
Multiple sclerosis is a demyelinating disease of the central nervous system associated with varied levels of disability. The impact of early physiotherapeutic interventions on disease progression is unknown. We used an experimental model of demyelination with the gliotoxic agent ethidium bromide, together with early aquatic exercise, to evaluate the motor performance of the animals. We quantified the number of footsteps and errors during the beam walking test. The demyelinated animals walked fewer steps, with a greater number of errors, than the control group. The demyelinated animals that performed aquatic exercise presented better motor performance than those that did not exercise. Therefore, aquatic exercise was beneficial to the motor performance of rats in this experimental model of demyelination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S; Chao, C; Columbia University, NY, NY
2014-06-01
Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference in sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error.
Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.
Koren, Katja; Pišot, Rado; Šimunič, Boštjan
2016-05-01
To determine the effects of a moderate-intensity active workstation on time and error during simulated office work, the study analysed simultaneous work and exercise for non-sedentary office workers. We monitored oxygen uptake, heart rate, sweat stain area, self-perceived effort, typing-test time, typing error count, and cognitive performance during 30 min of exercise with no cycling or cycling at 40 and 80 W. Compared with baseline, physiological responses increased at 40 and 80 W, corresponding to moderate physical activity (PA). Typing time significantly increased by 7.3% (p = 0.002) in C40W and by 8.9% (p = 0.011) in C80W. Typing error count and cognitive performance were unchanged. Although moderate-intensity exercise on a cycling workstation during simulated office tasks increases task execution time, the effect size is moderate and the error rate does not increase. Participants confirmed that such a working design is suitable for achieving the minimum standards for daily PA during work hours.
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task knowing only the global system state and the fuzzy error.
NASA Astrophysics Data System (ADS)
Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay
2004-10-01
A multi-hypothesis motion compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference blocks in the frame buffer, is proposed and analyzed for error resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilient capability of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
NASA Astrophysics Data System (ADS)
Kotasidis, F. A.; Mehranian, A.; Zaidi, H.
2016-05-01
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
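The post-reconstruction (3D) pathway that the abstract contrasts with direct 4D reconstruction can be sketched with a minimal kinetic fit. The Patlak graphical method below recovers an influx rate Ki by linear regression; the plasma input, tissue curve and all parameter values are synthetic illustrations, not data from the paper.

```python
import numpy as np

# Synthetic, noiseless curves (hypothetical values, not data from the paper)
t = np.linspace(1.0, 60.0, 60)                # frame mid-times (min)
Cp = 10.0 * np.exp(-0.1 * t) + 1.0            # plasma input function
intCp = np.cumsum(Cp) * (t[1] - t[0])         # running integral of Cp

Ki_true, V = 0.05, 0.3
Ct = Ki_true * intCp + V * Cp                 # irreversible-uptake tissue curve

# Patlak plot: Ct/Cp vs (integral Cp)/Cp is linear with slope Ki
xP = intCp / Cp
yP = Ct / Cp
Ki_est, V_est = np.polyfit(xP[30:], yP[30:], 1)   # fit the late linear part
```

On noiseless curves the regression recovers Ki exactly; with noisy reconstructed frames, errors in individual frames bias the fit, which is the error propagation the paper targets.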
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1, the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviour. It is, therefore, essential that compression implementations provide sufficiently robust error control to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
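The point about prefix codes resynchronising after a bit error can be shown in a few lines. The code table below is a hypothetical three-symbol prefix code, not one from the paper; a single flipped bit garbles nearby symbols, yet decoding falls back into step.

```python
# Hypothetical three-symbol prefix code (not from the paper)
code = {'A': '0', 'B': '10', 'C': '11'}
decode_map = {v: k for k, v in code.items()}

def encode(msg):
    return ''.join(code[s] for s in msg)

def decode(bits):
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in decode_map:        # a complete codeword: emit and reset
            out.append(decode_map[buf])
            buf = ''
    return ''.join(out)

msg = 'ABCABCA'
bits = encode(msg)
flipped = '1' if bits[3] == '0' else '0'
corrupted = bits[:3] + flipped + bits[4:]   # single channel bit error
decoded = decode(corrupted)
# symbols near the flip are garbled, but the decoder resynchronises
# and the tail of the message comes out intact
```

In an adaptive scheme (adaptive Huffman, arithmetic, Lempel-Ziv) the corrupted symbols would also corrupt the evolving model, so the damage would not stay local as it does here.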
Propagation of coherent light pulses with PHASE
NASA Astrophysics Data System (ADS)
Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.
2014-09-01
The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (the stationary phase approximation), which is usually truncated at 2nd order. The limits of this approximation, as well as possible extensions to higher orders, are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long-range slope errors of optical elements can be included by means of 8th-order polynomials in the optical element coordinates w and l. A method for the description of short-range slope errors has been implemented only recently; its accuracy is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language; the latter method provides substantial flexibility. Optical elements, including apertures, can be combined, and complete wave packets can be propagated as well. Fourier propagators are included in the package, so the user may choose among a variety of propagators. Several means of speeding up the computation were tested, among them parallelization in a multi-core environment and parallelization on a cluster.
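A minimal sketch of the Fourier-propagator idea mentioned at the end of the abstract, assuming an angular-spectrum formulation on a 1-D grid (the wavelength and geometry are arbitrary illustration values, not PHASE parameters):

```python
import numpy as np

# 1-D angular-spectrum (Fourier) propagator over a distance z
wl = 1e-9                      # wavelength (m), illustration value
n, dx, z = 1024, 1e-6, 0.5     # samples, grid step (m), distance (m)
x = (np.arange(n) - n // 2) * dx
field = np.exp(-(x / 50e-6) ** 2)          # Gaussian source field

fx = np.fft.fftfreq(n, dx)                 # spatial frequencies
kz = 2 * np.pi / wl * np.sqrt(np.maximum(0.0, 1.0 - (wl * fx) ** 2))
out = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * z))

# free-space propagation is unitary, so optical power is conserved
power_in = np.sum(np.abs(field) ** 2)
power_out = np.sum(np.abs(out) ** 2)
```

Power conservation is a convenient sanity check for any such propagator, since the transfer function has unit modulus for all propagating frequencies.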
Sittig, D. F.; Orr, J. A.
1991-01-01
Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification including expert systems, statistical signal processing techniques, and artificial neural networks (ANN). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators are required to train their networks on huge training data sets, requiring enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm. PMID:1807607
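As a back-of-envelope check (my own, not from the paper), Amdahl's law can be inverted to estimate what fraction of the training run was parallelised, given the reported speedup of 4.06 on six processors:

```python
def amdahl_speedup(p, n):
    """Speedup on n processors when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: the p consistent with an observed speedup."""
    return (1.0 / speedup - 1.0) / (1.0 / n - 1.0)

p = parallel_fraction(4.06, 6)   # reported: speedup 4.06 on six processors
# p comes out near 0.90, i.e. roughly 90% of the run was parallelised
```

This is consistent with a master-worker design in which weight updates and I/O remain serial while the per-pattern error propagation is farmed out to workers.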
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
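A conceptual sketch, in Python rather than compiler IR, of the insight described above: if addresses are computed incrementally, an injected bit flip persists to the loop exit, where a single check detects it. The base address, stride and flipped bit are arbitrary illustration values.

```python
# Addresses computed incrementally: addr[i+1] = addr[i] + stride.
# An injected bit flip therefore propagates to every later address.
def addresses(base, stride, n, flip_at=None, bit=4):
    addr, out = base, []
    for i in range(n):
        if i == flip_at:
            addr ^= 1 << bit        # simulated soft error in the address
        out.append(addr)
        addr += stride
    return out

base, stride, n = 0x1000, 8, 100
clean = addresses(base, stride, n)
faulty = addresses(base, stride, n, flip_at=50)

# one check at the loop exit suffices to expose the corruption
detected = faulty[-1] != clean[-1]
```

Had each address been recomputed independently as `base + i * stride`, only iteration 50 would be wrong and the final address would (falsely) look correct.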
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements is a time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density but also the distance between the camera and the volcanic plume has to be precisely known. This is because cameras only measure angular extents of objects, while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction, and thus the camera-plume distance, is not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This source of error is independent of the frequently quoted (approximate) compensation between apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known locations of the SO2 source (i.e. the volcanic vent) and the camera, the camera-plume distance can be determined. Besides determining the plume propagation direction, and thus the wind direction in the plume region, directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes.
In addition to theoretical studies, we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise SO2 fluxes (up to a factor of 2 error reduction). We conclude that studies of SO2 flux variability become more reliable when possible influences of propagation direction variations are excluded.
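A common scaling argument (my own, not the paper's derivation) shows why distance errors are so damaging: both the plume's spatial extent and its image-derived speed scale linearly with the assumed camera-plume distance, so the flux estimate scales with distance squared.

```python
# If the assumed camera-plume distance is off by a relative error e,
# plume cross-section and image-derived speed are both off by (1 + e),
# so the flux estimate is off by (1 + e)**2.
def flux_error(rel_distance_error):
    return (1.0 + rel_distance_error) ** 2 - 1.0

err_25 = flux_error(0.25)   # a 25% distance error -> ~56% flux error
```

Under this scaling, a 25% distance error already pushes the flux error past the 50% figure quoted in the abstract.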
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and obtain the coefficient estimates, the residual standard error, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
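A Python analogue of the session's `lm` fit, on synthetic data rather than Galton's, recovering the slope, intercept, residuals and R²:

```python
import numpy as np

# Synthetic (x, y) pairs: y = 2 + 0.5 x + noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)

slope, intercept = np.polyfit(x, y, 1)   # least-squares fit, as lm() does
fitted = intercept + slope * x
residuals = y - fitted
r2 = 1.0 - residuals.var() / y.var()     # coefficient of determination
```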
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction. However, a residual error remains between the real value and the predicted result. We therefore designed a two-state neural network model to compensate for this residual error, which could be used in the prevention of sudden death and of metabolic-syndrome diseases such as hypertension and obesity. We found that most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model; in particular, small-sample time series were predicted more accurately than with the standard MLP model.
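The two-stage idea can be sketched as follows (a toy re-creation with linear predictors, not the paper's neural network model): a first predictor models the series, a second predictor is trained on the first predictor's residual error, and its output is added back as compensation.

```python
import numpy as np

# Synthetic quasi-periodic "bio-signal": trend plus oscillation
t = np.arange(200, dtype=float)
y = 0.05 * t + np.sin(0.3 * t)

# State 1: a coarse trend model leaves a structured residual error
coef = np.polyfit(t, y, 1)
pred1 = np.polyval(coef, t)
resid = y - pred1

# State 2: a one-step-ahead linear predictor trained on that residual
a1, a0 = np.polyfit(resid[:-1], resid[1:], 1)
pred2 = pred1[1:] + (a1 * resid[:-1] + a0)   # compensated prediction

mse1 = np.mean((y[1:] - pred1[1:]) ** 2)     # uncompensated error
mse2 = np.mean((y[1:] - pred2) ** 2)         # compensated error (smaller)
```

The compensation works because the residual of the first model is structured, not white; the paper's second network exploits the same property.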
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
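The "roughly halves" claim can be checked with a scalar random-walk example (not the EUVE attitude analysis itself): a forward Kalman filter pass followed by a backward Rauch-Tung-Striebel pass, tracking variances only.

```python
import numpy as np

# Scalar random walk: x[k] = x[k-1] + w,  y[k] = x[k] + v
q, r, n = 0.01, 1.0, 500        # process noise, measurement noise, steps

P_pred = np.empty(n)            # predicted (prior) variances
P_filt = np.empty(n)            # filtered (posterior) variances
P = 1.0
for k in range(n):              # forward Kalman filter, variances only
    P += q                      # propagate
    P_pred[k] = P
    P = P * r / (P + r)         # measurement update
    P_filt[k] = P

P_smooth = P_filt.copy()        # backward Rauch-Tung-Striebel pass
for k in range(n - 2, -1, -1):
    C = P_filt[k] / P_pred[k + 1]
    P_smooth[k] = P_filt[k] + C ** 2 * (P_smooth[k + 1] - P_pred[k + 1])

ratio = P_smooth[n // 2] / P_filt[n // 2]   # ~0.5 away from the record ends
```

Mid-record, the smoother draws roughly equal information from past and future measurements, which is where the factor-of-two variance reduction comes from; at the ends of the record the benefit shrinks.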
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that by carefully accounting for various sensor calibration and atmospheric propagation effects, an expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits, to within this study's ability to measure error.
TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and the associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of the radioactivity of the component gases.
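The simulation-based extension the abstract mentions can be illustrated with a Monte Carlo propagation through the three stages; the efficiencies, counts and uncertainties below are hypothetical illustration values.

```python
import numpy as np

# Stage-wise model: activity = counts / (sampling eff * extraction eff)
rng = np.random.default_rng(1)
N = 200_000
sample_eff  = rng.normal(0.90, 0.02, N)    # stage I: sample collection
extract_eff = rng.normal(0.80, 0.03, N)    # stage II: gas separation chemistry
counts      = rng.normal(1000.0, 30.0, N)  # stage III: radioactivity counting

activity = counts / (sample_eff * extract_eff)
mc_rel = activity.std() / activity.mean()  # Monte Carlo relative uncertainty

# first-order analytic propagation: quadrature sum of relative errors
analytic_rel = np.sqrt((0.02 / 0.90) ** 2 + (0.03 / 0.80) ** 2
                       + (30.0 / 1000.0) ** 2)
```

For small relative errors the Monte Carlo and first-order analytic results agree closely; the simulation route generalises more easily when further correction factors are chained in.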
NASA Astrophysics Data System (ADS)
Pichardo, Samuel; Moreno-Hernández, Carlos; Drainville, Robert Andrew; Sin, Vivian; Curiel, Laura; Hynynen, Kullervo
2017-09-01
A better understanding of ultrasound transmission through the human skull is fundamental to the development of optimal imaging and therapeutic applications. In this study, we present global attenuation values and functions that correlate apparent density, calculated from computed tomography scans, to the shear speed of sound. For this purpose, we used a model for sound propagation based on the viscoelastic wave equation (VWE) assuming isotropic conditions. The model was validated using a series of measurements with plates of different plastic materials and angles of incidence of 0°, 15° and 50°. The optimal functions for transcranial ultrasound propagation were established using the VWE, scan measurements of transcranial propagation with an angle of incidence of 40° and a genetic optimization algorithm. Ten (10) locations over three (3) skulls were used for ultrasound frequencies of 270 kHz and 836 kHz. Results with plastic materials demonstrated that the viscoelastic modelling predicted both longitudinal and shear propagation with an average (±s.d.) error of 9(±7)% of the wavelength in the predicted delay and an error of 6.7(±5)% in the estimation of transmitted power. Using the new optimal functions of speed of sound and global attenuation for the human skull, the proposed model predicted transcranial ultrasound transmission for a frequency of 270 kHz with an expected error in the predicted delay of 5(±2.7)% of the wavelength. The model predicted the sound propagation accurately regardless of whether shear or longitudinal transmission dominated. For 836 kHz, the model predicted accurately on average, with an error in the predicted delay of 17(±16)% of the wavelength. The results indicate the importance of voxel-level specificity of the information to better understand ultrasound transmission through the skull.
These results and new model will be very valuable tools for the future development of transcranial applications of ultrasound therapy and imaging.
Does Exercise Improve Cognitive Performance? A Conservative Message from Lord's Paradox.
Liu, Sicong; Lebeau, Jean-Charles; Tenenbaum, Gershon
2016-01-01
Although extant meta-analyses support the notion that exercise results in cognitive performance enhancement, methodological shortcomings are noted in the primary evidence. The present study examined relevant randomized controlled trials (RCTs) published in the past 20 years (1996-2015) for methodological concerns arising from Lord's paradox. Our analysis revealed that RCTs supporting the positive effect of exercise on cognition are likely to include Type I error(s). This result can be attributed to the use of gain-score analysis on pretest-posttest data as well as the presence of control-group superiority over the exercise group on baseline cognitive measures. To improve the accuracy of causal inferences in this area, analysis of covariance on pretest-posttest data is recommended under the assumption of group equivalence. Important experimental procedures for maintaining group equivalence are discussed.
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Continued investigation of potential application of Omega navigation to civil aviation
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1978-01-01
Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two co-located, like receivers to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase-amplitude correlation studies are presented.
Smartphone-Based Cardiac Rehabilitation Program: Feasibility Study.
Chung, Heewon; Ko, Hoon; Thap, Tharoeun; Jeong, Changwon; Noh, Se-Eung; Yoon, Kwon-Ha; Lee, Jinseok
2016-01-01
We introduce a cardiac rehabilitation program (CRP) that utilizes only a smartphone, with no external devices. As an efficient guide for cardiac rehabilitation exercise, we developed an application to automatically indicate the exercise intensity by comparing the estimated heart rate (HR) with the target heart rate zone (THZ). The HR is estimated using video images of a fingertip taken by the smartphone's built-in camera. The introduced CRP app includes pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as THZ, exercise type, exercise stage order, and duration of each stage are set up. In the exercise with intensity guidance, the app estimates HR from the pulse obtained using the smartphone's built-in camera and compares the estimated HR with the THZ. Based on this comparison, the app adjusts the exercise intensity to shift the patient's HR to the THZ during exercise. In the post-exercise period, the app manages the ratio of the estimated HR to the THZ and provides a questionnaire on factors such as chest pain, shortness of breath, and leg pain during exercise, as objective and subjective evaluation indicators. As a key issue, HR estimation upon signal corruption due to motion artifacts is also considered. Through the smartphone-based CRP, we estimated the HR accuracy as mean absolute error and root mean squared error of 6.16 and 4.30 bpm, respectively, with signal corruption due to motion artifacts being detected by combining the turning point ratio and kurtosis.
Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media
Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.
2009-01-01
Green's functions for radar waves propagating in heterogeneous 2.5D media might be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties might vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions might be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.
The Relationship of Exercise to Fatigue and Quality of Life in Women With Breast Cancer
1999-08-01
exercise study during the first 3 cycles of chemotherapy. Weight change, body mass index, anorexia, nausea, caloric expenditure during exercise and... caloric expenditure increased, fatigue declined. However, the effects of exercise intensity were only significant for the least fatigue (p=.0402) and...Exercise dose and fatigue 25 Table 7. Least squares means and standard errors for four measures of daily fatigue by caloric expenditure . Caloric
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Auger, Ludovic
2003-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99% and the computational cost of covariance propagation by 80, 93 and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case, and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields occurred in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
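A minimal 2-D Haar-wavelet truncation demo (far simpler than the paper's compactly supported wavelets and assimilation system) shows why a smooth error covariance can survive heavy coefficient truncation:

```python
import numpy as np

def haar(v):
    """Full orthonormal 1-D Haar decomposition (length must be a power of 2)."""
    v = v.astype(float)
    n = v.size
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)   # pairwise averages
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)   # pairwise details
        v[:n // 2], v[n // 2:n] = a, d
        n //= 2
    return v

def ihaar(v):
    """Inverse of haar()."""
    v = v.astype(float)
    n = 2
    while n <= v.size:
        a, d = v[:n // 2].copy(), v[n // 2:n].copy()
        v[0:n:2] = (a + d) / np.sqrt(2.0)
        v[1:n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return v

# A smooth, band-structured "error covariance" on a 64-point grid
i = np.arange(64)
C = np.exp(-(((i[:, None] - i[None, :]) / 8.0) ** 2))

# 2-D transform: rows, then columns; drop the smallest 90% of coefficients
W = np.apply_along_axis(haar, 0, np.apply_along_axis(haar, 1, C))
thresh = np.quantile(np.abs(W), 0.90)
Wt = np.where(np.abs(W) >= thresh, W, 0.0)
Ct = np.apply_along_axis(ihaar, 1, np.apply_along_axis(ihaar, 0, Wt))

rel_err = np.linalg.norm(C - Ct) / np.linalg.norm(C)  # modest despite 90% cut
```

The smooth structure concentrates energy in coarse-scale coefficients, which is the property the paper exploits when truncating only along the low-gradient zonal direction.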
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future missions, and some current ones, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis performed using EUVE state vectors showed that the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km, depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen.
This algorithm can be easily coded in software that would pick the epoch within a specified time range that minimizes the OBC propagation error. This technique should greatly improve the accuracy of OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing the complexity of the ground processing.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfying. Moreover, the quantification of uncertainty is also satisfying, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on site features; in some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
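A Monte Carlo sketch (not the paper's Bayesian machinery) of propagating the two stage-error types through a hypothetical power-law rating curve Q = a(h − b)^c; the systematic gauge offset is drawn once per replicate, while the random error is redrawn at every time step:

```python
import numpy as np

# Hypothetical rating curve Q = a * (h - b)**c and a 100-step stage series
rng = np.random.default_rng(42)
a, b, c = 20.0, 0.1, 1.8
h = np.linspace(0.5, 2.0, 100)                  # true stage (m)

n = 5000                                        # Monte Carlo replicates
sys_err = rng.normal(0.0, 0.01, (n, 1))         # systematic gauge offset (m),
                                                # one draw per replicate
rand_err = rng.normal(0.0, 0.005, (n, h.size))  # resolution/waves, per step

Q = a * (h + sys_err + rand_err - b) ** c       # propagated streamflow
q_sd = Q.std(axis=0)                            # per-step flow uncertainty

# For the record-averaged flow the random part shrinks like 1/sqrt(100),
# but the systematic offset does not average out:
sd_of_mean_flow = Q.mean(axis=1).std()
```

This reproduces the abstract's point about long-term averages: the uncertainty of the mean flow stays far above the naive 1/√N reduction because the systematic gauge error is common to all time steps.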
Applying Metrological Techniques to Satellite Fundamental Climate Data Records
NASA Astrophysics Data System (ADS)
Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.
2018-02-01
Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record and on understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation, and consider how uncertainty is propagated under partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors, to give the covariance associated with Earth observations in different spectral channels.
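The consequence of error correlation for channel averaging can be made concrete with the general law of propagation, u² = JΣJᵀ; the uncertainty values below are illustrative, not from any sensor.

```python
import numpy as np

# Two channels share a fully correlated (common) error u_com and each
# has an independent error u_ind; the measurand is the two-channel mean.
u_ind, u_com = 0.10, 0.20

# error covariance of the two channels
S = np.array([[u_ind**2 + u_com**2, u_com**2],
              [u_com**2,            u_ind**2 + u_com**2]])
J = np.array([0.5, 0.5])            # sensitivity (Jacobian) of the mean

u_mean = np.sqrt(J @ S @ J)         # law of propagation with correlation
# averaging halves the independent variance but not the common one:
# u_mean**2 == u_ind**2 / 2 + u_com**2
```

This is why correlated (common) errors dominate climate-scale averages: they do not beat down with more observations, whereas independent noise does.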
Land mobile satellite propagation measurements in Japan using ETS-V satellite
NASA Technical Reports Server (NTRS)
Obara, Noriaki; Tanaka, Kenji; Yamamoto, Shin-Ichi; Wakana, Hiromitsu
1993-01-01
Propagation characteristics of land mobile satellite communications channels have been investigated actively in recent years. Information on propagation characteristics associated with multipath fading and shadowing is required to design commercial land mobile satellite communications systems, including protocols and error correction methods. CRL (Communications Research Laboratory) has carried out propagation measurements using the Engineering Test Satellite-V (ETS-V) at L band (1.5 GHz) along main roads in Japan, using a medium-gain antenna with autotracking capability. This paper presents the propagation statistics obtained in this campaign.
Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; M.A. Pope; R.M. Ferrer
2010-10-01
The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine the INL's current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 fuel column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in the HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with the MC methods, but a consistent bias of 2–3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with those of other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the fast-turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run, in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the statistics of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small-body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
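The core covariance-propagation step that tools of this kind perform can be sketched as follows (a generic linearized time update with illustrative numbers, not the actual G-CAT formulation):

```python
import numpy as np

# Minimal sketch: propagate a 2D position/velocity covariance through one
# step of linear constant-velocity dynamics, then extract the 1-sigma error
# ellipse semi-axes from the position block via eigendecomposition.
dt = 10.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state transition matrix
Q = np.diag([0.0, 0.0, 1e-4, 1e-4])          # process noise (assumed values)

P = np.diag([1.0, 1.0, 0.01, 0.01])          # initial covariance
P = F @ P @ F.T + Q                          # covariance time update

pos_cov = P[:2, :2]
eigvals, _ = np.linalg.eigh(pos_cov)
semi_axes = np.sqrt(eigvals)                 # 1-sigma ellipse semi-axes
```

A single pass of such updates yields the performance envelope directly, which is why covariance analysis replaces thousands of Monte Carlo trajectories.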
Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system-level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm's operational workspace. Error sources at each link in the arm's kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
Strength tests for elite rowers: low- or high-repetition?
Lawton, Trent W; Cronin, John B; McGuigan, Michael R
2014-01-01
The purpose of this project was to evaluate the utility of low- and high-repetition maximum (RM) strength tests used to assess rowers. Twenty elite heavyweight males (age 23.7 ± 4.0 years) performed four tests (5 RM, 30 RM, 60 RM and 120 RM) using leg press and seated arm pulling exercises on a dynamometer. Each test was repeated on two further occasions, 3 and 7 days from the initial trial. Per cent typical error (within-participant variation) and intraclass correlation coefficients (ICCs) were calculated using log-transformed repeated-measures data. High-repetition tests (30 RM, 60 RM and 120 RM) involving the seated arm pulling exercise are not recommended for inclusion in an assessment battery, as they had unsatisfactory measurement precision (per cent typical error > 5% or ICC < 0.9). Conversely, low-repetition tests (5 RM) involving the leg press and seated arm pulling exercises could be used to assess elite rowers (per cent typical error ≤ 5% and ICC ≥ 0.9); however, only 5 RM leg pressing met the criteria (per cent typical error = 2.7%, ICC = 0.98) for research involving small samples (n = 20). In summary, low-repetition 5 RM strength tests offer greater utility for assessing rowers, as they can be used to measure upper- and lower-body strength; however, only the leg press exercise is recommended for research involving small squads of elite rowers.
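The per cent typical error statistic named above can be computed from log-transformed test-retest data roughly as follows (hypothetical loads, not the study's data):

```python
import numpy as np

# Hypothetical test-retest loads (kg) for five athletes on a 5-RM test.
trial1 = np.array([150.0, 180.0, 165.0, 200.0, 175.0])
trial2 = np.array([152.0, 178.0, 168.0, 198.0, 177.0])

# Typical error = SD of difference scores / sqrt(2); on the log scale it
# back-transforms to a coefficient-of-variation-like percentage.
diff = np.log(trial2) - np.log(trial1)
typical_error = np.std(diff, ddof=1) / np.sqrt(2)        # within-subject SD
pct_typical_error = 100.0 * (np.exp(typical_error) - 1)  # per cent typical error
```

Against the paper's criterion, a test with `pct_typical_error` above 5% would be flagged as having unsatisfactory measurement precision.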
Does Exercise Improve Cognitive Performance? A Conservative Message from Lord's Paradox
Liu, Sicong; Lebeau, Jean-Charles; Tenenbaum, Gershon
2016-01-01
Although extant meta-analyses support the notion that exercise results in cognitive performance enhancement, methodological shortcomings are noted in the primary evidence. The present study examined relevant randomized controlled trials (RCTs) published in the past 20 years (1996–2015) for methodological concerns arising from Lord's paradox. Our analysis revealed that RCTs supporting the positive effect of exercise on cognition are likely to include Type I error(s). This result can be attributed to the use of gain-score analysis on pretest-posttest data, as well as to the presence of control-group superiority over the exercise group on baseline cognitive measures. To improve the accuracy of causal inferences in this area, analysis of covariance on pretest-posttest data is recommended under the assumption of group equivalence. Important experimental procedures are discussed to maintain group equivalence. PMID:27493637
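The contrast between gain-score analysis and ANCOVA can be illustrated on simulated pretest-posttest data with non-equivalent groups and no true treatment effect (all values synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                      # subjects per group
group = np.repeat([0, 1], n)                  # 0 = control, 1 = exercise
pre = rng.normal(100.0, 10.0, 2 * n) - 5.0 * group   # groups differ at baseline
post = 50.0 + 0.5 * pre + rng.normal(0.0, 5.0, 2 * n)  # no true group effect

# (1) Gain-score analysis: difference in mean gain between groups.
# Regression to the mean makes the lower-baseline group "gain" more.
gain = post - pre
gain_diff = gain[group == 1].mean() - gain[group == 0].mean()  # spuriously > 0

# (2) ANCOVA-style analysis: regress posttest on pretest and group indicator.
X = np.column_stack([np.ones(2 * n), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_effect = beta[2]                     # close to the true effect of 0
```

With baseline non-equivalence, the gain-score estimate is biased away from zero while the covariate-adjusted estimate is not, which is the Type I error mechanism the abstract describes.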
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters, so that the expectation and covariance matrix of the 3D point location can be obtained; together these constitute the uncertainty region of the point location. Afterwards, we return to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
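For intuition, the first-order propagation of a single disparity error to depth in a rectified stereo pair (a much-simplified stand-in for the paper's five-parameter model; all numbers assumed) looks like this:

```python
import numpy as np

# Rectified-stereo depth from disparity: Z = f * B / d.
f = 1000.0      # focal length, px (assumed)
B = 0.12        # baseline, m (assumed)
d = 20.0        # measured disparity, px
u_d = 0.5       # disparity uncertainty, px

Z = f * B / d                       # triangulated depth, m
u_Z = (Z**2 / (f * B)) * u_d        # first-order propagated depth uncertainty

# Cross-check the analytic propagation with Monte Carlo sampling.
rng = np.random.default_rng(1)
Z_mc = f * B / (d + rng.normal(0.0, u_d, 100_000))
```

The quadratic growth of `u_Z` with depth is why distant points have much looser uncertainty regions than near ones.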
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state in the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output is then corrected. An upset node in the error detection circuit itself can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
NASA Technical Reports Server (NTRS)
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
Using special functions to model the propagation of airborne diseases
NASA Astrophysics Data System (ADS)
Bolaños, Daniela
2014-06-01
Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model the propagation using the error function and the Bessel function. In the model, infected individuals emit pathogens into the environment, infecting other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function; it is solved numerically for different parametric simulations, and the evolution in time of the number of infected individuals is plotted for each simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for some numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of the spread of tuberculosis.
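The two special functions involved can be evaluated directly; this sketch (with arbitrary parameters, not those of the paper) shows the erf-type temporal build-up and the K0 spatial decay:

```python
import numpy as np
from scipy.special import erf, k0

# Illustrative evaluation of the model's building blocks.
t = np.linspace(0.1, 8.0, 50)      # time (assumed units)
tau = 2.0                           # characteristic build-up time (assumed)
C = erf(t / tau)                    # normalized pathogen concentration vs time

r = np.linspace(0.5, 5.0, 50)       # distance from the infection source (assumed units)
spatial = k0(r)                     # K0 decay of pathogen density with distance
```

The erf term saturates toward a steady-state concentration, while K0 diverges near the source and decays roughly exponentially at large distance, matching the qualitative behavior the abstract describes.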
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, but not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing the relative errors of the phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil in achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent while achieving similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
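For reference, the dispersion error that such optimization targets can be computed in closed form for the standard second-order scheme in 1D (illustrative parameters):

```python
import numpy as np

# Numerical dispersion of the standard 2nd-order scheme for the 1D scalar
# wave equation: sin(w*dt/2) = C * sin(k*dx/2), C = c*dt/dx (Courant number).
c = 1500.0                  # wave speed, m/s (assumed)
dx = 10.0                   # grid spacing, m (assumed)
dt = 0.004                  # time step, s -> Courant number 0.6

k = np.linspace(1e-6, np.pi / dx, 200)           # wavenumbers up to Nyquist
courant = c * dt / dx
omega = (2.0 / dt) * np.arcsin(courant * np.sin(k * dx / 2.0))
v_num = omega / k                                # numerical phase velocity
rel_error = np.abs(v_num - c) / c                # relative dispersion error
```

The error is negligible at long wavelengths but grows toward the Nyquist wavenumber; optimized coefficients flatten this curve over the wavenumber band of interest in both space and time.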
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
Advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has the significant advantage over the second of correcting any error deterministically, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would certainly speed up the process of reliable encryption and hence secure communication.
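A minimal sketch of the redundancy idea (not the paper's exact construction): transmit each ciphertext block three times and repair channel errors by a bitwise majority vote before decryption, so a flipped bit never reaches the cipher stage where it would propagate:

```python
# Redundancy-based error correction: 2-of-3 bitwise majority over three
# received copies of one ciphertext block (illustrative scheme only).
def majority_vote(copy_a: bytes, copy_b: bytes, copy_c: bytes) -> bytes:
    """Return the bitwise majority of three equal-length byte strings."""
    return bytes((a & b) | (a & c) | (b & c)
                 for a, b, c in zip(copy_a, copy_b, copy_c))

block = bytes(range(16))                 # one 128-bit AES ciphertext block
corrupted = bytearray(block)
corrupted[5] ^= 0b00100000               # single-bit channel error in one copy
repaired = majority_vote(block, bytes(corrupted), block)
```

The trade-off the abstract mentions is visible here: the vote repairs any single-copy error deterministically, but at the cost of a threefold transmission overhead.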
Corrigendum and addendum. Modeling weakly nonlinear acoustic wave propagation
Christov, Ivan; Christov, C. I.; Jordan, P. M.
2014-12-18
This article presents errors, corrections, and additions to the research outlined in the following citation: Christov, I., Christov, C. I., & Jordan, P. M. (2007). Modeling weakly nonlinear acoustic wave propagation. The Quarterly Journal of Mechanics and Applied Mathematics, 60(4), 473-495.
Consistency and convergence for numerical radiation conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.
Accounting for uncertainty in DNA sequencing data.
O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J
2015-02-01
Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with their own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
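A worked example of these definitions (hypothetical repeated measurements):

```python
import math

# The within-subject SD of repeated measurements on one person is the
# measurement error; repeatability is 2.77 times that SD.
measurements = [12.1, 12.4, 11.9]        # repeated measures, same person (hypothetical)
mean = sum(measurements) / len(measurements)
within_sd = math.sqrt(sum((x - mean) ** 2 for x in measurements)
                      / (len(measurements) - 1))
repeatability = 2.77 * within_sd         # 95% limit for the test-retest difference
```

The factor 2.77 is sqrt(2) times 1.96: the SD of a difference between two measurements is sqrt(2) times the within-subject SD, and 1.96 gives the 95% coverage.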
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
Lin, Yin-Liang; Karduna, Andrew
2016-10-01
Proprioception is essential for shoulder neuromuscular control and shoulder stability. Exercise of the rotator cuff and scapulothoracic muscles is an important part of shoulder rehabilitation. The purpose of this study was to investigate the effect of rotator cuff and scapulothoracic muscle exercises on shoulder joint position sense. Thirty-six healthy subjects were recruited and randomly assigned into either a control or training group. The subjects in the training group received closed-chain and open-chain exercises focusing on the rotator cuff and scapulothoracic muscles for four weeks. Shoulder joint position sense errors in elevation, including the humerothoracic, glenohumeral and scapulothoracic joints, were measured. After four weeks of exercise training, strength increased overall in the training group, which demonstrated the effect of exercise on the muscular system. However, the changes in shoulder joint position sense errors in any individual joint of the subjects in the training group were not different from those of the control subjects. Therefore, exercises specifically targeting individual muscles with low intensity may not be sufficient to improve shoulder joint position sense in healthy subjects. Future work is needed to further investigate which types of exercise are more effective in improving joint position sense, and the mechanisms associated with those changes. Copyright © 2016 Elsevier B.V. All rights reserved.
Acute effect of vigorous aerobic exercise on the inhibitory control in adolescents
Browne, Rodrigo Alberto Vieira; Costa, Eduardo Caldas; Sales, Marcelo Magalhães; Fonteles, André Igor; de Moraes, José Fernando Vila Nova; Barros, Jônatas de França
2016-01-01
Abstract Objective: To assess the acute effect of vigorous aerobic exercise on inhibitory control in adolescents. Methods: Controlled, randomized study with a crossover design. Twenty pubertal individuals underwent two 30-minute sessions: (1) an aerobic exercise session performed between 65% and 75% of heart rate reserve, divided into 5 min of warm-up, 20 min at the target intensity and 5 min of cool-down; and (2) a control session watching a cartoon. Before and after the sessions, the computerized Stroop test (Testinpacs™) was applied to evaluate inhibitory control. Reaction time (ms) and errors (n) were recorded. Results: The control session reaction time showed no significant difference. On the other hand, the reaction time of the exercise session decreased after the intervention (p<0.001). The number of errors made in the exercise session was lower than in the control session (p=0.011). Additionally, there was a positive association between the reaction time (Δ) of the exercise session and age (r²=0.404, p=0.003). Conclusions: Vigorous aerobic exercise seems to promote acute improvement of inhibitory control in adolescents. The effect of exercise on inhibitory control performance was associated with age, being reduced at older ages. PMID:26564328
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.;
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Measurements of aperture averaging on bit-error-rate
NASA Astrophysics Data System (ADS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-08-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
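The root-sum-of-squares budget with sensitivity weighting factors described above can be sketched as follows (tolerances and weights are illustrative, not the Sandia values):

```python
import math

# RSS error budget: each contributor is a mechanical tolerance multiplied by
# a sensitivity (weighting) factor reflecting how it propagates to pointing.
contributors = [
    # (tolerance in arcsec, weighting factor)
    (10.0, 1.00),   # e.g. encoder error, fully coupled (uncorrectable)
    (15.0, 0.71),   # e.g. axis orthogonality, partially correctable
    (5.0,  0.50),   # e.g. bearing wobble
]
budget = math.sqrt(sum((tol * w) ** 2 for tol, w in contributors))
```

Because errors combine in quadrature, the RSS budget is smaller than the worst-case linear sum, and a fully correctable source (weight 0) drops out of the budget entirely.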
Prediction of transmission distortion for wireless video communication: analysis.
Chen, Zhifeng; Wu, Dapeng
2012-03-01
Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation
NASA Astrophysics Data System (ADS)
Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell
2018-02-01
The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined by the CME's initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that employs ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of the ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, the mean absolute error MAE = 14.3 hr, and the root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to, the ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr and RMSE = 14.4 hr). Overall, DBEM and ENLIL show similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of the models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
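A minimal drag-based sketch (illustrative parameters; not the DBEM code) integrates the drag equation dv/dt = -gamma (v - w)|v - w| for an ensemble of uncertain launch speeds to obtain a spread of 1 au arrival times:

```python
import numpy as np

# Quadratic-drag CME propagation toward the solar-wind speed w.
# All parameter values are illustrative assumptions.
AU = 1.496e8                        # km
gamma = 0.2e-7                      # drag parameter, km^-1 (assumed)
w = 400.0                           # ambient solar-wind speed, km/s (assumed)

def arrival_time_hours(v0, dt=60.0):
    """Euler-integrate from 20 solar radii to 1 au; return travel time in hours."""
    r, v, t = 20.0 * 6.96e5, v0, 0.0
    while r < AU:
        v -= gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0

# Ensemble over the uncertain initial speed gives an arrival-time distribution.
ensemble_v0 = np.linspace(900.0, 1100.0, 21)   # launch speed range, km/s
arrivals = np.array([arrival_time_hours(v) for v in ensemble_v0])
```

The spread of `arrivals` plays the role of the DBEM uncertainty estimate: its median is the most likely arrival time and its width quantifies forecast confidence.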
Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.
Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki
2014-11-01
Error propagation into the Earth atmospheric, oceanic, and land-surface parameters of satellite products caused by misclassification in the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we propose a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification in the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using the sky camera data. The results show that the influence of error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than that by the CLAUDIA cloud mask, whereas the influence of error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
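The pixel-wise comparison underlying such a validation can be sketched with a toy contingency table. The arrays below are fabricated, and the actual sky-index and brightness-index algorithms are not modeled; only the bookkeeping of hits, misses, and false alarms is shown:

```python
import numpy as np

# Toy validation of a satellite cloud mask against a ground-based sky
# camera (GSC) mask over matched pixels: 1 = cloudy, 0 = clear.
rng = np.random.default_rng(3)
gsc = rng.integers(0, 2, 1000)               # "truth" from the sky camera
sat = np.where(rng.random(1000) < 0.9, gsc,  # satellite agrees ~90% of pixels
               1 - gsc)                      # ...and flips the rest

hits = np.sum((sat == 1) & (gsc == 1))
misses = np.sum((sat == 0) & (gsc == 1))
false_alarms = np.sum((sat == 1) & (gsc == 0))
correct_negatives = np.sum((sat == 0) & (gsc == 0))

agreement = (hits + correct_negatives) / gsc.size
print(f"hit rate {hits/(hits+misses):.2f}, "
      f"false-alarm rate {false_alarms/(false_alarms+correct_negatives):.2f}, "
      f"overall agreement {agreement:.2f}")
```

The asymmetry between "misses" and "false alarms" in such a table is exactly what distinguishes a cloud-conservative algorithm like MOD35 from a clear-conservative one like CLAUDIA.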
Concurrent remote entanglement with quantum error correction against photon losses
NASA Astrophysics Data System (ADS)
Roy, Ananda; Stone, A. Douglas; Jiang, Liang
2016-09-01
Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high-fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.
Yuan, Shen-fang; Jin, Xin; Qiu, Lei; Huang, Hong-mei
2015-03-01
To improve the safety of repaired aircraft structures, this article puts forward a method for monitoring crack propagation in repaired structures based on the characteristics of Fiber Bragg Grating (FBG) reflection spectra. As cyclic loading acts on a repaired structure, cracks propagate and a non-uniform strain field appears near the crack tip, which deforms the reflection spectra of the FBG sensors. Crack propagation can therefore be monitored by extracting the characteristics of these spectral deformations. A finite element model (FEM) of the specimen is established, and the strain distributions produced by cracks of different angles and lengths are obtained. Characteristics such as the main peak wavelength shift, the area of the reflection spectrum, and the second and third peak values are extracted from the FBG reflection spectra, which are calculated with a transfer matrix algorithm. An artificial neural network is built to model the relationship between the spectral characteristics and the crack propagation. As a result, the crack propagation of repaired structures is monitored accurately, with a crack length error of less than 0.5 mm and a crack angle error of less than 5 degrees. The method solves the problem of accurately monitoring crack propagation in repaired structures and is significant for improving aircraft safety and reducing maintenance costs.
Lee, Jin H; Howell, David R; Meehan, William P; Iverson, Grant L; Gardner, Andrew J
2017-09-01
The Sport Concussion Assessment Tool-Third Edition (SCAT3) is currently considered the standard sideline assessment for concussions. In-game exercise, however, may affect SCAT3 performance and the diagnosis of concussions. To examine the influence of exercise on SCAT3 performance in professional male athletes. Controlled laboratory study. We examined the SCAT3 performance of 82 professional male athletes under 2 conditions: at rest and after exercise. Athletes reported significantly fewer total symptoms (mean, 1.0 ± 1.5 vs 1.6 ± 2.3 total symptoms, respectively; P = .008; Cohen d = 0.34), committed significantly fewer errors on the modified Balance Error Scoring System (mean, 3.5 ± 3.5 vs 4.6 ± 4.1 errors, respectively; P = .017; d = 0.31), and required significantly less time to complete the tandem gait test (mean, 9.5 ± 1.4 vs 9.9 ± 1.7 seconds, respectively; P = .02; d = 0.30) during the at-rest condition compared with the postexercise condition. The interpretation of in-game (sideline) SCAT3 results should consider the effects of postexercise fatigue levels on an athlete's performance, particularly if preseason baseline data have been collected when the athlete was well rested. Exercise appears to affect symptom burden and physical abilities, such as balance and tandem gait, more so than the cognitive components of the SCAT3.
Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.
ERIC Educational Resources Information Center
Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki
2000-01-01
Describes a new component called "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The WEAM can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)
NASA Astrophysics Data System (ADS)
Lechtenberg, Travis; McLaughlin, Craig A.; Locke, Travis; Krishna, Dhaval Mysore
2013-01-01
This paper examines atmospheric density estimated using precision orbit ephemerides (POE) from the CHAMP and GRACE satellites during short periods of greater atmospheric density variability. The results of calibrating CHAMP densities derived using POEs against those derived using accelerometers are examined for three different types of density perturbations [traveling atmospheric disturbances (TADs), geomagnetic cusp phenomena, and midnight density maxima] in order to determine the temporal resolution of POE solutions. In addition, the densities are compared to High-Accuracy Satellite Drag Model (HASDM) densities to compare the temporal resolution of both types of corrections. The resolution of these models of thermospheric density was found to be inadequate to sufficiently characterize the short-term density variations examined here. Also examined in this paper is the effect of differing density estimation schemes, assessed by propagating an initial orbit state forward in time and examining the induced errors. The propagated POE-derived densities incurred errors of a smaller magnitude than the empirical models, and errors on the same scale as or better than those incurred using the HASDM model.
Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M
2009-11-01
The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O2·kg body mass⁻¹·min⁻¹. The technical error of measurement was 8.1% (circa 0.9 ml·kg⁻¹·min⁻¹; 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
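For paired readings, the technical error of measurement is commonly computed with Dahlberg's formula, TEM = sqrt(Σd²/2n), and expressed as a percentage of the grand mean. A minimal sketch with illustrative values (not the study's data):

```python
import numpy as np

# Hypothetical paired anaerobic-threshold readings (ml O2.kg^-1.min^-1)
# from two readers of the same tests; the values are made up.
reader_a = np.array([10.2, 11.5, 9.8, 12.0, 10.9, 9.5])
reader_b = np.array([10.6, 11.1, 10.3, 11.6, 11.2, 9.9])

d = reader_a - reader_b
# Dahlberg's technical error of measurement for paired readings.
tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
grand_mean = np.mean(np.concatenate([reader_a, reader_b]))
tem_percent = 100 * tem / grand_mean

print(f"TEM = {tem:.2f} ml O2.kg^-1.min^-1, TEM% = {tem_percent:.1f}%")
```

With nine readers, as in the study, the same quantity is usually derived from the within-subject variance of an ANOVA rather than from a single pair of columns; the paired form above is the simplest instance of the idea.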
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].
Accounting for apparent deviations between calorimetric and van't Hoff enthalpies.
Kantonen, Samuel A; Henriksen, Niel M; Gilson, Michael K
2018-03-01
In theory, binding enthalpies directly obtained from calorimetry (such as ITC) and from the temperature dependence of the binding free energy (van't Hoff method) should agree. However, previous studies have often found them to be discrepant. Experimental binding enthalpies (both calorimetric and van't Hoff) are obtained for two host-guest pairs using ITC, and the discrepancy between the two enthalpies is examined. Modeling of artificial ITC data is also used to examine how different sources of error propagate to both types of binding enthalpies. For the host-guest pairs examined here, good agreement, to within about 0.4 kcal/mol, is obtained between the two enthalpies. Additionally, using artificial data, we find that different sources of error propagate to either enthalpy uniquely, with concentration error and heat error propagating primarily to calorimetric and van't Hoff enthalpies, respectively. With modern calorimeters, good agreement between van't Hoff and calorimetric enthalpies should be achievable, barring issues due to non-ideality or unanticipated measurement pathologies. Indeed, disagreement between the two can serve as a flag for error-prone datasets. A review of the underlying theory supports the expectation that these two quantities should be in agreement. We address and arguably resolve long-standing questions regarding the relationship between calorimetric and van't Hoff enthalpies. In addition, we show that comparison of these two quantities can be used as an internal consistency check of a calorimetry study. Copyright © 2017 Elsevier B.V. All rights reserved.
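How scatter in the measured binding constants (the route by which heat error would act) propagates to the van't Hoff enthalpy can be illustrated with a small Monte Carlo sketch. All numbers below are illustrative assumptions, not values from the study, and the calorimetric side (where concentration error dominates) is not modeled:

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol*K)
dH_true, dS_true = -10.0, -0.01  # kcal/mol and kcal/(mol*K), illustrative

T = np.linspace(278.0, 318.0, 9)  # measurement temperatures, K
lnK_true = -dH_true / (R * T) + dS_true / R  # van't Hoff relation

rng = np.random.default_rng(0)
estimates = []
for _ in range(2000):
    # Random scatter in ln K, standing in for heat-measurement error.
    lnK = lnK_true + rng.normal(0.0, 0.05, T.size)
    slope, _ = np.polyfit(1.0 / T, lnK, 1)
    estimates.append(-slope * R)  # van't Hoff enthalpy from the slope

estimates = np.array(estimates)
print(f"mean dH = {estimates.mean():.2f} kcal/mol, "
      f"spread = {estimates.std():.2f} kcal/mol")
```

The estimator is unbiased here, but even modest scatter in ln K produces a spread of a few tenths of a kcal/mol in the fitted enthalpy, which is on the order of the agreement reported in the abstract.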
Optimal strategies for throwing accurately
2017-01-01
The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed–accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error. PMID:28484641
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
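The instrument-only error propagation referred to above can be sketched as a quadrature sum over depth bins. All numbers are illustrative assumptions, not ADCP specifications:

```python
import numpy as np

# Illustrative error propagation for a binned discharge estimate:
# Q = sum_i v_i * A_i, with independent velocity errors per bin.
v = np.array([0.9, 1.1, 1.2, 1.0, 0.8])   # bin velocities, m/s
A = np.array([2.0, 2.5, 2.5, 2.5, 2.0])   # bin areas, m^2
sigma_v = 0.05                             # per-bin velocity std, m/s

Q = np.sum(v * A)
sigma_Q = np.sqrt(np.sum((A * sigma_v) ** 2))  # errors add in quadrature

print(f"Q = {Q:.2f} m^3/s, sigma_Q = {sigma_Q:.3f} m^3/s "
      f"({100 * sigma_Q / Q:.1f}%)")
```

When the standard deviation of repeated field measurements exceeds an instrument-only estimate of this kind, the excess points to error sources outside the electronics and signal processing, which is the evaluation's conclusion.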
Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation
Jing, Yun; Tao, Molei; Clement, Greg T.
2011-01-01
A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985
Verifying Parentage and Confirming Identity in Blackberry with a Fingerprinting Set
USDA-ARS?s Scientific Manuscript database
Parentage and identity confirmation is an important aspect of clonally propagated, outcrossing crops. Potential errors resulting in misidentification include off-type pollination events, labeling errors, or sports of clones. DNA fingerprinting sets are an excellent solution to quickly identify off-type ...
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
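For a power-law fluid in a wide-gap concentric-cylinder viscometer, the power-law approximation gives the shear rate at the bob from the flow index n = d ln M / d ln Ω as γ̇_b = (2Ω/n)/(1 - κ^(2/n)), where κ is the ratio of bob to cup radius. A minimal sketch with synthetic, exactly power-law data (all values illustrative):

```python
import numpy as np

kappa = 0.5          # bob radius / cup radius (illustrative)
n_true = 0.8         # flow behavior index of the synthetic fluid
omega = np.array([1.0, 2.0, 5.0, 10.0, 20.0])  # angular velocities, rad/s
torque = 3.2 * omega ** n_true                  # exact power-law torques

# Flow index from the log-log slope of torque vs angular velocity.
n_hat, _ = np.polyfit(np.log(omega), np.log(torque), 1)

# Power-law approximation for the shear rate at the bob.
omega_eval = 10.0
gamma_dot = (2.0 * omega_eval / n_hat) / (1.0 - kappa ** (2.0 / n_hat))
print(f"n = {n_hat:.3f}, shear rate at bob = {gamma_dot:.2f} 1/s")
```

Propagating noise in the measured torques through this regression, for example by Monte Carlo resampling, yields the shear-rate uncertainty that the error analysis above quantifies for the different series approximations.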
Exercise-Induced Hypoalgesia After Isometric Wall Squat Exercise: A Test-Retest Reliability Study.
Vaegter, Henrik Bjarke; Lyng, Kristian Damgaard; Yttereng, Fredrik Wannebo; Christensen, Mads Holst; Sørensen, Mathias Brandhøj; Graven-Nielsen, Thomas
2018-05-19
Isometric exercises decrease pressure pain sensitivity in exercising and nonexercising muscles, known as exercise-induced hypoalgesia (EIH). No studies have assessed the test-retest reliability of EIH after isometric exercise. This study investigated the EIH on pressure pain thresholds (PPTs) after an isometric wall squat exercise. The relative and absolute test-retest reliability of the PPT as a test stimulus and the EIH response in exercising and nonexercising muscles were calculated. In two identical sessions, PPTs of the thigh and shoulder were assessed before and after three minutes of quiet rest and three minutes of wall squat exercise, respectively, in 35 healthy subjects. The relative test-retest reliability of PPT and EIH was determined using analysis of variance models, Pearson's r, and intraclass correlations (ICCs). The absolute test-retest reliability of EIH was determined based on PPT standard error of measurements and Cohen's kappa for agreement between sessions. Squat increased PPTs of exercising and nonexercising muscles by 16.8% ± 16.9% and 6.7% ± 12.9%, respectively (P < 0.001), with no significant differences between sessions. PPTs within and between sessions showed moderately strong correlations (r ≥ 0.74) and excellent (ICC ≥ 0.84) within-session (rest) and between-session test-retest reliability. EIH responses of exercising and nonexercising muscles showed no systematic errors between sessions; however, the relative test-retest reliability was low (ICCs = 0.03-0.43), and agreement in EIH responders and nonresponders between sessions was not significant (κ < 0.13, P > 0.43). A wall squat exercise increased PPTs compared with quiet rest; however, the relative and absolute reliability of the EIH response was poor. Future research is warranted to investigate the reliability of EIH in clinical pain populations.
Armstrong, Craig; Samuel, Jake; Yarlett, Andrew; Cooper, Stephen-Mark; Stembridge, Mike; Stöhr, Eric J.
2016-01-01
Increased left ventricular (LV) twist and untwisting rate (LV twist mechanics) are essential responses of the heart to exercise. However, previously a large variability in LV twist mechanics during exercise has been observed, which complicates the interpretation of results. This study aimed to determine some of the physiological sources of variability in LV twist mechanics during exercise. Sixteen healthy males (age: 22 ± 4 years, V˙O2peak: 45.5 ± 6.9 ml·kg⁻¹·min⁻¹, range of individual anaerobic threshold (IAT): 32–69% of V˙O2peak) were assessed at rest and during exercise at: i) the same relative exercise intensity, 40%peak, ii) at 2% above IAT, and, iii) at 40%peak with hypoxia (40%peak+HYP). LV volumes were not significantly different between exercise conditions (P > 0.05). However, the mean margin of error of LV twist was significantly lower (F2,47 = 2.08, P < 0.05) during 40%peak compared with IAT (3.0 vs. 4.1 degrees). Despite the same workload and similar LV volumes, hypoxia increased LV twist and untwisting rate (P < 0.05), but the mean margin of error remained similar to that during 40%peak (3.2 degrees, P > 0.05). Overall, LV twist mechanics were linearly related to rate pressure product. During exercise, the intra-individual variability of LV twist mechanics is smaller at the same relative exercise intensity compared with IAT. However, the absolute magnitude (degrees) of LV twist mechanics appears to be associated with the prevailing rate pressure product. Exercise tests that evaluate LV twist mechanics should be standardised by relative exercise intensity, and the rate pressure product should be taken into account when interpreting results. PMID:27100099
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, however, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results demonstrate that essential information can be obtained by carrying out a backward uncertainty propagation analysis.
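The forward direction is easy to sketch with Monte Carlo: sample the parameter subspace, push each sample through the model, and inspect the output distribution. The "model" below is a hypothetical first-order substrate-removal reactor chosen purely for illustration, not the oxidation ditch model of the note; the backward direction, inverting an output distribution back onto the parameter subspace, requires set-inversion or estimation techniques not shown here:

```python
import numpy as np

def effluent(k, s_in, tau):
    """Steady-state effluent concentration for first-order removal."""
    return s_in / (1.0 + k * tau)

rng = np.random.default_rng(1)
n = 10_000
# Sampled parameter subspace (distributions are illustrative assumptions).
k = rng.normal(0.5, 0.05, n)       # rate constant, 1/h
s_in = rng.normal(200.0, 10.0, n)  # influent concentration, mg/L
tau = 6.0                          # hydraulic residence time, h (fixed)

s_out = effluent(k, s_in, tau)     # forward-propagated output distribution
lo, hi = np.percentile(s_out, [2.5, 97.5])
print(f"effluent: mean {s_out.mean():.1f} mg/L, "
      f"95% interval [{lo:.1f}, {hi:.1f}] mg/L")
```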
Space-Borne Laser Altimeter Geolocation Error Analysis
NASA Astrophysics Data System (ADS)
Wang, Y.; Fang, J.; Ai, Y.
2018-05-01
This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot are analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy of the laser control point, a design index for each error source is put forward.
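The structure of such an error propagation equation can be illustrated with a back-of-envelope horizontal error budget for a nadir-pointing altimeter. All magnitudes below are illustrative assumptions, not ICESat specifications:

```python
import math

ALT_M = 600e3                    # assumed orbit altitude, m
ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

pos_err = 5.0                     # platform position error, m
att_err = ALT_M * 1.5 * ARCSEC    # attitude error -> horizontal spot shift
point_err = ALT_M * 1.0 * ARCSEC  # pointing-angle error -> horizontal shift
range_err = 0.15                  # ranging error, m; at nadir this acts
                                  # mostly vertically, so it is excluded
                                  # from the horizontal budget below

# Independent error sources combine in quadrature.
horiz = math.sqrt(pos_err**2 + att_err**2 + point_err**2)
print(f"attitude term {att_err:.2f} m, pointing term {point_err:.2f} m, "
      f"combined horizontal error {horiz:.2f} m")
```

The sketch shows why angular error sources dominate the horizontal geolocation budget: each arcsecond of attitude or pointing error maps to meters on the ground at orbital altitude, whereas the same sources contribute differently to the vertical component, consistent with the direction-dependent influences discussed above.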
Teaching concepts of clinical measurement variation to medical students.
Hodder, R A; Longfield, J N; Cruess, D F; Horton, J A
1982-09-01
An exercise in clinical epidemiology was developed for medical students to demonstrate the process and limitations of scientific measurement using models that simulate common clinical experiences. All scales of measurement (nominal, ordinal and interval) were used to illustrate concepts of intra- and interobserver variation, systematic error, recording error, and procedural error. In a laboratory, students a) determined blood pressures on six videotaped subjects, b) graded sugar content of unknown solutions from 0 to 4+ using Clinitest tablets, c) measured papules that simulated PPD reactions, d) measured heart and kidney size on X-rays and, e) described a model skin lesion (melanoma). Traditionally, measurement variation is taught in biostatistics or epidemiology courses using previously collected data. Use of these models enables students to produce their own data using measurements commonly employed by the clinician. The exercise provided material for a meaningful discussion of the implications of measurement error in clinical decision-making.
Turbine flowmeter vs. Fleisch pneumotachometer: a comparative study for exercise testing.
Yeh, M P; Adams, T D; Gardner, R M; Yanowitz, F G
1987-09-01
The purpose of this study was to investigate the characteristics of a newly developed turbine flowmeter (Alpha Technologies, model VMM-2) for use in an exercise testing system by comparing its measurement of expiratory flow (VE), O2 uptake (VO2), and CO2 output (VCO2) with the Fleisch pneumotachometer. An IBM PC/AT-based breath-by-breath system was developed, with turbine flowmeter and dual-Fleisch pneumotachometers connected in series. A normal subject was tested twice at rest, 100-W, and 175-W of exercise. Expired gas of 24-32 breaths was collected in a Douglas bag. VE was within 4% accuracy for both flowmeter systems. The Fleisch pneumotachometer system had 5% accuracy for VO2 and VCO2 at rest and exercise. The turbine flowmeter system had up to 20% error for VO2 and VCO2 at rest. Errors decreased as work load increased. Visual observations of the flow curves revealed the turbine signal always lagged the Fleisch signal at the beginning of inspiration or expiration. At the end of inspiration or expiration, the turbine signal continued after the Fleisch signal had returned to zero. The "lag-before-start" and "spin-after-stop" effects of the turbine flowmeter resulted in larger than acceptable error for the VO2 and VCO2 measurements at low flow rates.
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt.16, 055403 (2014)JOOPDB0150-536X10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
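The slope-to-phase estimation step can be illustrated with a minimal zonal least-squares reconstruction on a square grid. This uses a simplified Hudgin-style geometry, not the Southwell or Pathak-Boruah algorithm itself, and the test wavefront is arbitrary:

```python
import numpy as np

n, h = 12, 1.0  # grid size and spacing
y, x = np.mgrid[0:n, 0:n] * h
phi_true = 0.05 * x**2 - 0.03 * y**2 + 0.02 * x * y  # test wavefront

# Finite-difference "slope measurements" between neighboring nodes.
rows, cols, data, b = [], [], [], []
idx = lambda i, j: i * n + j
eq = 0
for i in range(n):
    for j in range(n - 1):          # x-direction slopes
        rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]
        data += [1.0, -1.0]; b.append(phi_true[i, j + 1] - phi_true[i, j]); eq += 1
for i in range(n - 1):
    for j in range(n):              # y-direction slopes
        rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]
        data += [1.0, -1.0]; b.append(phi_true[i + 1, j] - phi_true[i, j]); eq += 1

A = np.zeros((eq, n * n))
A[rows, cols] = data
phi_hat, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
phi_hat = phi_hat.reshape(n, n)

# The reconstruction is defined only up to piston; compare mean-removed fields.
err = (phi_hat - phi_hat.mean()) - (phi_true - phi_true.mean())
print(f"max reconstruction error: {np.abs(err).max():.2e}")
```

With noiseless slopes the recovery is exact up to piston; the error propagation the abstract analyzes arises when centroiding noise is added to b and the chosen geometry and discretization determine how that noise maps into the estimated phase.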
DiStefano, Lindsay J; Padua, Darin A; DiStefano, Michael J; Marshall, Stephen W
2009-03-01
Anterior cruciate ligament (ACL) injury prevention programs show promising results in changing movement; however, little information exists regarding whether a program tailored to an individual's movements may be effective or how baseline movement quality may affect outcomes. It was hypothesized that a program designed to change specific movements would be more effective than a "one-size-fits-all" program, that the greatest improvement would be observed among individuals with the most baseline error, and that subjects of different ages and sexes would respond similarly. Randomized controlled trial; Level of evidence, 1. One hundred seventy-three youth soccer players from 27 teams were randomly assigned to a generalized or a stratified program. Subjects were videotaped during jump-landing trials before and after the program and were assessed using the Landing Error Scoring System (LESS), a valid clinical movement analysis tool; a high LESS score indicates more errors. Generalized players performed the same exercises, while stratified players performed exercises targeting their initial movement errors. Change scores were compared between groups of varying baseline errors, ages, sexes, and programs. Subjects with the highest baseline LESS scores improved the most (95% CI, -3.4 to -2.0). High school subjects (95% CI, -1.7 to -0.98) improved their technique more than pre-high school subjects (95% CI, -1.0 to -0.4). There was no difference between the programs or sexes. Players with the greatest amount of movement errors experienced the most improvement. A program's effectiveness may be enhanced if this population is targeted.
Bias Reduction and Filter Convergence for Long Range Stereo
NASA Technical Reports Server (NTRS)
Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav
2005-01-01
We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
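The range-dependent bias described above can be reproduced with a toy Monte Carlo. The numbers below are illustrative, not from the paper: for Gaussian disparity noise, E[1/d] > 1/E[d], so the triangulated range fB/d is overestimated on average, and a second-order series expansion predicts the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
f_B = 1000.0      # focal length x baseline (pixel-metres), illustrative
d_true = 5.0      # true disparity (pixels) -> true range of 200 m
sigma = 0.3       # disparity measurement noise std (pixels), illustrative

# Simulate noisy disparity measurements and naive triangulation z = fB/d.
d_meas = d_true + sigma * rng.normal(size=1_000_000)
z = f_B / d_meas
z_true = f_B / d_true

# Monte Carlo estimate of the bias: the mean range is too large.
bias_mc = z.mean() - z_true

# Second-order series expansion of E[fB/(d + eps)] predicts the same bias:
# E[z] ~= z_true * (1 + sigma^2 / d_true^2)
bias_series = z_true * (sigma / d_true) ** 2
```

For fixed disparity noise the bias grows with range (disparity shrinks), which is why the effect matters most for long-range stereo and why subtracting a series-expansion correction, as the authors propose, helps before filtering.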
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP): DRGEP is first applied to efficiently remove many unimportant species, and sensitivity analysis is then used to remove further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is shown that combining the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each individual method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach that can employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems because it exploits already acquired information while simultaneously exploring new knowledge in the learning space. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of, or even the entire, data set. This paper addresses the error propagation problem caused by such mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. The procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. Particles of the same class cooperate with each other, while particles of different classes compete to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model. Copyright © 2012 Elsevier Ltd. All rights reserved.
Metcalfe, A W S; MacIntosh, B J; Scavone, A; Ou, X; Korczak, D; Goldstein, B I
2016-01-01
Executive dysfunction is common during and between mood episodes in bipolar disorder (BD), causing social and functional impairment. This study investigated the effect of acute exercise on adolescents with BD and healthy control subjects (HC) to test for positive or negative consequences on neural response during an executive task. Fifty adolescents (mean age 16.54±1.47 years, 56% female, 30 with BD) completed an attention and response inhibition task before and after 20 min of recumbent cycling at ~70% of age-predicted maximum heart rate. 3 T functional magnetic resonance imaging data were analyzed in a whole brain voxel-wise analysis and as regions of interest (ROI), examining Go and NoGo response events. In the whole brain analysis of Go trials, exercise had larger effect in BD vs HC throughout ventral prefrontal cortex, amygdala and hippocampus; the profile of these effects was of greater disengagement after exercise. Pre-exercise ROI analysis confirmed this 'deficit in deactivation' for BDs in rostral ACC and found an activation deficit on NoGo errors in accumbens. Pre-exercise accumbens NoGo error activity correlated with depression symptoms and Go activity with mania symptoms; no correlations were present after exercise. Performance was matched to controls and results survived a series of covariate analyses. This study provides evidence that acute aerobic exercise transiently changes neural response during an executive task among adolescents with BD, and that pre-exercise relationships between symptoms and neural response are absent after exercise. Acute aerobic exercise constitutes a biological probe that may provide insights regarding pathophysiology and treatment of BD. PMID:27187236
Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status.
Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L
2016-03-01
Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 min. Subsequently, participants estimated the number of calories they expended through exercise and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. The mean difference between estimated and measured calories in exercise and food did not differ within or between groups after moderate exercise. After vigorous exercise, OW-noWL overestimated energy expenditure by 72% and overestimated the calories in their food by 37% (P < 0.05). OW-noWL also significantly overestimated exercise energy expenditure compared with all other groups (P < 0.05) and significantly overestimated calories in food compared with both WL groups (P < 0.05). However, among all groups, there was a considerable range of overestimation and underestimation (-280 to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. There was a wide range of underestimation and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss.
Su, Junjing; Manisty, Charlotte; Simonsen, Ulf; Howard, Luke S; Parker, Kim H; Hughes, Alun D
2017-10-15
Wave travel plays an important role in cardiovascular physiology. However, many aspects of pulmonary arterial wave behaviour remain unclear. Wave intensity and reservoir-excess pressure analyses were applied in the pulmonary artery in subjects with and without pulmonary hypertension during spontaneous respiration and dynamic stress tests. Arterial wave energy decreased during expiration and Valsalva manoeuvre due to decreased ventricular preload. Wave energy also decreased during handgrip exercise due to increased heart rate. In pulmonary hypertension patients, the asymptotic pressure at which the microvascular flow ceases, the reservoir pressure related to arterial compliance and the excess pressure caused by waves increased. The reservoir and excess pressures decreased during Valsalva manoeuvre but remained unchanged during handgrip exercise. This study provides insights into the influence of pulmonary vascular disease, spontaneous respiration and dynamic stress tests on pulmonary artery wave propagation and reservoir function. Detailed haemodynamic analysis may provide novel insights into the pulmonary circulation. Therefore, wave intensity and reservoir-excess pressure analyses were applied in the pulmonary artery to characterize changes in wave propagation and reservoir function during spontaneous respiration and dynamic stress tests. Right heart catheterization was performed using a pressure and Doppler flow sensor tipped guidewire to obtain simultaneous pressure and flow velocity measurements in the pulmonary artery in control subjects and patients with pulmonary arterial hypertension (PAH) at rest. In controls, recordings were also obtained during Valsalva manoeuvre and handgrip exercise. The asymptotic pressure at which the flow through the microcirculation ceases, the reservoir pressure related to arterial compliance and the excess pressure caused by arterial waves increased in PAH patients compared to controls. 
The systolic and diastolic rate constants also increased, while the diastolic time constant decreased. The forward compression wave energy decreased by ∼8% in controls and ∼6% in PAH patients during expiration compared to inspiration, while the wave speed remained unchanged throughout the respiratory cycle. Wave energy decreased during Valsalva manoeuvre (by ∼45%) and handgrip exercise (by ∼27%) with unaffected wave speed. Moreover, the reservoir and excess pressures decreased during Valsalva manoeuvre but remained unaltered during handgrip exercise. In conclusion, reservoir-excess pressure analysis applied to the pulmonary artery revealed distinctive differences between controls and PAH patients. Variations in the ventricular preload and afterload influence pulmonary arterial wave propagation as demonstrated by changes in wave energy during spontaneous respiration and dynamic stress tests. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. 
The systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.
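The error-introduction step above can be illustrated with a small sketch that displaces each inventory point in a uniformly random direction. The displacement-distance distribution is an assumption here (exponential); the abstract only states the mean displacement distances.

```python
import numpy as np

def jitter_points(xy, mean_dist, rng):
    """Displace each (x, y) point in a uniformly random direction, with
    distances drawn from an exponential distribution of mean mean_dist.
    This mimics introducing positional error into a point inventory."""
    n = len(xy)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = rng.exponential(mean_dist, n)
    return xy + np.column_stack((r * np.cos(theta), r * np.sin(theta)))

rng = np.random.default_rng(42)
pts = rng.uniform(0, 1000, size=(5000, 2))   # synthetic inventory (metres)
for mean_dist in (5, 10, 20, 50, 120):        # error levels used in the study
    noisy = jitter_points(pts, mean_dist, rng)
    # each perturbed inventory would then be used to retrain the
    # susceptibility model and the results compared against the original
```

Each perturbed inventory yields a differently precise response variable, allowing the downstream effect on model coefficients, validation scores and variable importance to be measured.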
Mutual optical intensity propagation through non-ideal mirrors
Meng, Xiangyu; Shi, Xianbo; Wang, Yong; ...
2017-08-18
The mutual optical intensity (MOI) model is extended to include the propagation of partially coherent radiation through non-ideal mirrors. The propagation of the MOI from the incident to the exit plane of the mirror is realised by local ray tracing. The effects of figure errors can be expressed as phase shifts obtained by either the phase projection approach or the direct path length method. Using the MOI model, the effects of figure errors are studied for diffraction-limited cases using elliptical cylinder mirrors. Figure errors with low spatial frequencies can vary the intensity distribution, redistribute the local coherence function and distort the wavefront, but have no effect on the global degree of coherence. The MOI model is benchmarked against HYBRID and the multi-electron Synchrotron Radiation Workshop (SRW) code. The results show that the MOI model gives accurate results under different coherence conditions of the beam. Other than intensity profiles, the MOI model can also provide the wavefront and the local coherence function at any location along the beamline. The capability of tuning the trade-off between accuracy and efficiency makes the MOI model an ideal tool for beamline design and optimization.
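As one concrete example of the direct path length idea: at grazing incidence, a local mirror height error h adds an optical path change of about 2·h·sin(θ), hence a phase shift of 2·k·h·sin(θ). The wavelength, grazing angle and height error below are illustrative values, not taken from the paper.

```python
import numpy as np

wavelength = 1.0e-10             # 1 Angstrom X-rays, illustrative
k = 2.0 * np.pi / wavelength     # wavenumber
theta = np.deg2rad(0.2)          # grazing-incidence angle, illustrative
h = 2.0e-9                       # 2 nm local figure-error height, illustrative

path_error = 2.0 * h * np.sin(theta)   # extra optical path on reflection
phase_shift = k * path_error           # resulting phase shift in radians
```

Even a nanometre-scale height error thus produces a non-negligible fraction-of-a-radian phase shift at hard X-ray wavelengths, which is why low-spatial-frequency figure errors visibly redistribute the intensity and local coherence downstream.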
Smallfield, Stacy; Heckenlaible, Cindy
The purpose of this systematic review was to describe the evidence for the effectiveness of interventions designed to establish, modify, and maintain occupations for adults with Alzheimer's disease (AD) and related neurocognitive disorders. Titles and abstracts of 2,597 articles were reviewed, of which 256 were retrieved for full review and 52 met inclusion criteria. U.S. Preventive Services Task Force levels of certainty and grade definitions were used to describe the strength of evidence. Articles were categorized into five themes: occupation-based, sleep, cognitive, physical exercise, and multicomponent interventions. Strong evidence supports the benefits of occupation-based interventions, physical exercise, and error-reduction learning. Occupational therapy practitioners should integrate daily occupations, physical exercise, and error-reduction techniques into the daily routine of adults with AD to enhance occupational performance and delay functional decline. Future research should focus on establishing consensus on types and dosage of exercise and cognitive interventions. Copyright © 2017 by the American Occupational Therapy Association, Inc.
Linear Elastic Waves - Series: Cambridge Texts in Applied Mathematics (No. 26)
NASA Astrophysics Data System (ADS)
Harris, John G.
2001-10-01
Wave propagation and scattering are among the most fundamental processes that we use to comprehend the world around us. While these processes are often very complex, one way to begin to understand them is to study wave propagation in the linear approximation. This is a book describing such propagation using, as a context, the equations of elasticity. Two unifying themes are used. The first is that an understanding of plane wave interactions is fundamental to understanding more complex wave interactions. The second is that waves are best understood in an asymptotic approximation where they are free of the complications of their excitation and are governed primarily by their propagation environments. The topics covered include reflection, refraction, the propagation of interfacial waves, integral representations, radiation and diffraction, and propagation in closed and open waveguides. Linear Elastic Waves is an advanced-level textbook directed at applied mathematicians, seismologists, and engineers. Aimed at beginning graduate students, it includes examples and exercises and has applications in a wide range of disciplines.
Linguistic Knowledge and Reasoning for Error Diagnosis and Feedback Generation.
ERIC Educational Resources Information Center
Delmonte, Rodolfo
2003-01-01
Presents four sets of natural language processing-based exercises for which error correction and feedback are produced by means of a rich database in which linguistic information is encoded either at the lexical or the grammatical level. (Author/VWL)
Corrigendum to “Thermophysical properties of U3Si2 to 1773 K”
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler
2016-12-01
An error was discovered by the authors in the calculation of thermal diffusivity in “Thermophysical properties of U3Si2 to 1773 K”. The error was caused by operator error in the entry of parameters used to fit the temperature-rise-versus-time model needed to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%–28% larger, along with correspondingly affected calculated Lorenz values.
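The propagation itself is direct: thermal conductivity is typically computed as k = α·ρ·c_p, so a fractional error in the fitted diffusivity α carries through one-to-one into k. The numbers below are illustrative stand-ins, not the corrected values from the corrigendum.

```python
# k = alpha * rho * cp: a relative error in alpha propagates unchanged into k.
alpha_true = 3.0e-6       # m^2/s, illustrative thermal diffusivity
rho, cp = 12.2e3, 200.0   # kg/m^3 and J/(kg K), illustrative values

alpha_bad = alpha_true * 1.22   # e.g. a 22% overestimate from bad fit parameters
k_true = alpha_true * rho * cp
k_bad = alpha_bad * rho * cp
rel_err = k_bad / k_true - 1.0  # fractional error in conductivity
```

A 22% diffusivity overestimate yields a 22% conductivity overestimate, consistent in magnitude with the 18%–28% excess the corrigendum reports.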
Combined Acoustic Propagation in Eastpac Region (Exercise CAPER): Initial Acoustic Analysis
1978-06-01
the possibility of out-of-plane reflections off a second seamount when shadowed by the seamount chosen for crossing. Fieberling Tablemount then became...Hanna, then of the Acoustic Environmental Support Detachment (AESD), had a number of reservations and suggestions as to the exercise plan. The...distance to Track A. The calculations of Fig. 4 were based on the predicted sound-speed profile and on seamount cross sections taken at 1.8-km
NASA Astrophysics Data System (ADS)
Li, H. J.; Wei, F. S.; Feng, X. S.; Xie, Y. Q.
2008-09-01
This paper investigates methods to improve the predictions of Shock Arrival Time (SAT) by the original Shock Propagation Model (SPM). According to the classical blast wave theory adopted in the SPM, the shock propagation speed is determined by the total energy of the original explosion together with the background solar wind speed. Noting that there exists an intrinsic limit to the transit times computed by the SPM for a specified ambient solar wind, we present a statistical analysis of the forecasting capability of the SPM using this intrinsic property. Two facts about the SPM are found: (1) the error in shock energy estimation is not the only cause of the prediction errors, and we should not expect the accuracy of the SPM to improve drastically given an exact shock energy input; and (2) there are systematic differences in prediction results both for strong shocks propagating into a slow ambient solar wind and for weak shocks propagating into a fast medium. The statistical analyses reveal physical details of shock propagation and thus clearly point out directions for future improvement of the SPM. A simple modification is presented here, which shows that there is room for improvement and thus that the original SPM is worthy of further development.
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
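The reported scaling can be sketched as a decay model plus a log-log fit that recovers the exponent. The data below are synthetic and the exponent value is illustrative, not taken from the paper.

```python
import numpy as np

def propagation_prob(p0, latency, alpha, t0=1.0):
    """Power-law temporal decay: the probability that a message propagates
    along a tie decays with the time elapsed since the latest interaction,
    p(t) = p0 * (t / t0) ** (-alpha), clamped below t0."""
    return p0 * (np.maximum(latency, t0) / t0) ** (-alpha)

# Recover the exponent from (synthetic) decay data via a log-log fit:
# a power law appears as a straight line in log-log coordinates.
t = np.arange(1.0, 200.0)
p = propagation_prob(0.2, t, alpha=0.8)
slope, _ = np.polyfit(np.log(t), np.log(p), 1)
alpha_hat = -slope
```

In practice the fit would be run on empirically estimated propagation frequencies binned by interaction latency; the fitted exponent then parameterizes the temporal model used for prediction.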
Tam, Nicoladie D
2013-01-01
This study aims to identify the acute effects of physical exercise on specific cognitive functions immediately following an increase in cardiovascular activity. Stair-climbing exercise is used to increase the cardiovascular output of human subjects. The color-naming Stroop Test was used to identify the cognitive improvements in executive function with respect to processing speed and error rate. The study compared the Stroop results before and immediately after exercise and before and after nonexercise, as a control. The results show that there is a significant increase in processing speed and a reduction in errors immediately after less than 30 min of aerobic exercise. The improvements are greater for the incongruent than for the congruent color tests. This suggests that physical exercise induces a better performance in a task that requires resolving conflict (or interference) than a task that does not. There is no significant improvement for the nonexercise control trials. This demonstrates that an increase in cardiovascular activity has significant acute effects on improving the executive function that requires conflict resolution (for the incongruent color tests) immediately following aerobic exercise more than similar executive functions that do not require conflict resolution or involve the attention-inhibition process (for the congruent color tests).
Lee, C Matthew; Gorelick, Mark; Mendoza, Albert
2011-12-01
The purpose of this study was to examine the accuracy of the ePulse Personal Fitness Assistant, a forearm-worn device that provides measures of heart rate and estimates of energy expenditure. Forty-six participants engaged in 4-minute periods of standing, 2.0 mph walking, 3.5 mph walking, 4.5 mph jogging, and 6.0 mph running. Heart rate and energy expenditure were simultaneously recorded at 60-second intervals using the ePulse, an electrocardiogram (EKG), and indirect calorimetry. The heart rates obtained from the ePulse were highly correlated (intraclass correlation coefficients [ICCs] ≥0.85) with those from the EKG during all conditions. The typical errors progressively increased with increasing exercise intensity and were <5 bpm only during rest and 2.0 mph walking. Energy expenditure from the ePulse was poorly correlated with indirect calorimetry (ICCs: 0.01-0.36), and the typical errors for energy expenditure ranged from 0.69 to 2.97 kcal · min(-1), progressively increasing with exercise intensity. These data suggest that the ePulse Personal Fitness Assistant is a valid device for monitoring heart rate at rest and during low-intensity exercise but becomes less accurate as exercise intensity increases. It does not, however, appear to be a valid device for estimating energy expenditure during exercise.
Deterministic Modeling of the High Temperature Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Power (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green’s function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with those of other benchmark participants but are 40% higher than the experimental values.
This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
A support-operator method for 3-D rupture dynamics
NASA Astrophysics Data System (ADS)
Ely, Geoffrey P.; Day, Steven M.; Minster, Jean-Bernard
2009-06-01
We present a numerical method to simulate spontaneous shear crack propagation within a heterogeneous, 3-D, viscoelastic medium. Wave motions are computed on a logically rectangular hexahedral mesh, using the generalized finite-difference method of Support Operators (SOM). This approach enables modelling of non-planar surfaces and non-planar fault ruptures. Our implementation, the Support Operator Rupture Dynamics (SORD) code, is highly scalable, enabling large-scale multiprocessor calculations. The fault surface is modelled by coupled double nodes, where rupture occurs as dictated by the local stress conditions and a frictional failure law. The method successfully performs test problems developed for the Southern California Earthquake Center (SCEC)/U.S. Geological Survey (USGS) dynamic earthquake rupture code validation exercise, showing good agreement with semi-analytical boundary integral method results. We undertake further dynamic rupture tests to quantify numerical errors introduced by shear deformations to the hexahedral mesh. We generate a family of meshes distorted by simple shearing, in the along-strike direction, up to a maximum of 73°. For SCEC/USGS validation problem number 3, grid-induced errors increase with mesh shear angle, with the logarithm of error approximately proportional to angle over the range tested. At 73°, rms misfits are about 10 per cent for peak slip rate, and 0.5 per cent for both rupture time and total slip, indicating that the method (which, up to now, we have applied mainly to near-vertical strike-slip faulting) is also capable of handling geometries appropriate to low-angle surface-rupturing thrust earthquakes. Additionally, we demonstrate non-planar rupture effects, by modifying the test geometry to include, respectively, cylindrical curvature and sharp kinks.
CME Arrival-time Validation of Real-time WSA-ENLIL+Cone Simulations at the CCMC/SWRC
NASA Astrophysics Data System (ADS)
Wold, A. M.; Mays, M. L.; Taktakishvili, A.; Jian, L.; Odstrcil, D.; MacNeice, P. J.
2016-12-01
The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations worldwide to model CME propagation, so it is important to assess its performance. We present validation results of the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real-time by the CCMC/Space Weather Research Center (SWRC). The SWRC is a CCMC sub-team that provides space weather services to NASA robotic mission operators and science campaigns, and also prototypes new forecasting models and techniques. CCMC/SWRC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model-predicted CME arrival times to in-situ ICME shock observations near Earth (ACE, Wind), STEREO-A, and STEREO-B for simulations completed between March 2010 and July 2016 (over 1500 runs). We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we compute the bias, RMSE, and average absolute CME arrival-time error, and the dependence of these errors on CME input parameters. We compare the predicted geomagnetic storm strength (Kp index) to the CME arrival-time error for Earth-directed CMEs. The predicted Kp index is computed using the WSA-ENLIL+Cone plasma parameters at Earth with a modified Newell et al. (2007) coupling function. We also explore the impact of multi-spacecraft observations on the CME parameters used to initialize the model by comparing model validation results before and after the STEREO-B communication loss (since September 2014) and STEREO-A side-lobe operations (August 2014-December 2015). This model validation exercise has significance for future space weather mission planning, such as L5 missions.
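The hit/miss/false-alarm/correct-rejection bookkeeping and the bias, RMSE, and mean absolute arrival-time error metrics named above can be sketched as follows; the event list and error values are invented placeholders, not CCMC validation data.

```python
import math

# Hypothetical CME arrival predictions. dt_hours is the signed arrival-time
# error (predicted minus observed) for hits; the numbers are illustrative.
events = [
    {"predicted": True,  "observed": True,  "dt_hours": -8.0},  # early prediction
    {"predicted": True,  "observed": True,  "dt_hours":  5.0},  # late prediction
    {"predicted": True,  "observed": False, "dt_hours": None},  # false alarm
    {"predicted": False, "observed": True,  "dt_hours": None},  # miss
    {"predicted": False, "observed": False, "dt_hours": None},  # correct rejection
]

hits = [e for e in events if e["predicted"] and e["observed"]]
misses = sum(1 for e in events if not e["predicted"] and e["observed"])
false_alarms = sum(1 for e in events if e["predicted"] and not e["observed"])
correct_rejections = sum(1 for e in events if not e["predicted"] and not e["observed"])

errors = [e["dt_hours"] for e in hits]
bias = sum(errors) / len(errors)                         # signed mean error
rmse = math.sqrt(sum(x * x for x in errors) / len(errors))
mae = sum(abs(x) for x in errors) / len(errors)          # mean absolute error

print(len(hits), misses, false_alarms, correct_rejections)  # 2 1 1 1
print(bias, mae)  # -1.5 6.5
```

Note that the bias can be near zero even when individual errors are large, which is why the RMSE and mean absolute error are reported alongside it.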
QUANTIFYING UNCERTAINTY IN NET PRIMARY PRODUCTION MEASUREMENTS
Net primary production (NPP, e.g., g m^-2 yr^-1), a key ecosystem attribute, is estimated from a combination of other variables, e.g., standing crop biomass at several points in time, each of which is subject to errors in their measurement. These errors propagate as the variables a...
Observer Biases in the Classroom.
ERIC Educational Resources Information Center
Kite, Mary E.
1991-01-01
Presents three student exercises that demonstrate common perceptual errors described in social psychological literature: actor-observer effect, false consensus bias, and priming effects. Describes methods to be followed and gives terms, sentences, and a story to be used in the exercises. Suggests discussion of the bases and impact of such…
Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2
2016-06-01
outgoing error propagation condition declarations (see Section 5.2.2). The declaration consists of a source error behavior state, possibly annotated... 2012. [Feiler 2013] Feiler, P. H.; Goodenough, J. B.; Gurfinkel, A.; Weinstock, C. B.; & Wrage, L. Four Pillars for Improving the Quality of... May 2002. [Paige 2009] Paige, Richard F.; Rose, Louis M.; Ge, Xiaocheng; Kolovos, Dimitrios S.; & Brooke, Phillip J. FPTC: Automated Safety
TIME SIGNALS, *SYNCHRONIZATION (ELECTRONICS), NETWORKS, FREQUENCY, STANDARDS, RADIO SIGNALS, ERRORS, VERY LOW FREQUENCY, PROPAGATION, ACCURACY, ATOMIC CLOCKS, CESIUM, RADIO STATIONS, NAVAL SHORE FACILITIES
An improved empirical model for diversity gain on Earth-space propagation paths
NASA Technical Reports Server (NTRS)
Hodge, D. B.
1981-01-01
An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
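As a sketch of how an empirical model's fit quality is summarized by a single RMS error figure (0.73 dB above), one can compute the root-mean-square residual of a model against measurements; the saturating model form, its coefficients, and the data points below are hypothetical stand-ins, not Hodge's actual fit or data.

```python
import math

# Placeholder diversity-gain model: a saturating curve in separation
# distance. The coefficients a, b and the data are invented.
def model_gain(distance_km, a=6.0, b=0.4):
    # G = a * (1 - exp(-b * d)): gain grows with separation, then saturates
    return a * (1.0 - math.exp(-b * distance_km))

# (separation distance in km, measured diversity gain in dB) - hypothetical
measurements = [(1.0, 2.1), (5.0, 5.2), (10.0, 5.9), (20.0, 6.1)]

residuals = [g - model_gain(d) for d, g in measurements]
rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(round(rms, 3))  # a single summary number, as in the abstract's 0.73 dB
```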
A Recovery-Oriented Approach to Dependable Services: Repairing Past Errors with System-Wide Undo
2003-12-01
4.5.3 Handling propagating paradoxes: the squash interface … 4.6 Discussion … 6.3.3 Compensating for paradoxes … 6.3.4 Squashing propagating … the service and comparing the behavior of the replicas to detect and squash misbehaving replicas. While on paper Byzantine fault tolerance may seem to
Pole of rotation analysis of present-day Juan de Fuca plate motion
NASA Technical Reports Server (NTRS)
Nishimura, C.; Wilson, D. S.; Hey, R. N.
1984-01-01
Convergence rates between the Juan de Fuca and North American plates are calculated by means of their relative, present-day pole of rotation. A method of calculating the propagation of errors in addition to the instantaneous poles of rotation is also formulated and applied to determine the Euler pole for Pacific-Juan de Fuca. This pole is vectorially added to previously published poles for North America-Pacific and 'hot spot'-Pacific to obtain North America-Juan de Fuca and 'hot spot'-Juan de Fuca, respectively. The errors associated with these resultant poles are determined by propagating the errors of the two summed angular velocity vectors. Under the assumption that hot spots are fixed with respect to a mantle reference frame, the average absolute velocity of the Juan de Fuca plate is computed at approximately 15 mm/yr, thereby making it the slowest-moving of the oceanic plates.
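The error propagation step described above can be sketched numerically: when two independently estimated angular velocity (Euler) vectors are summed, their covariance matrices add. The vectors and covariances below are illustrative placeholders, not the published pole estimates.

```python
import numpy as np

# Hypothetical angular-velocity vectors (deg/Myr) for two plate pairs:
# w_AC = w_AB + w_BC, and for independent estimates,
# Cov(w_AC) = Cov(w_AB) + Cov(w_BC).
w_na_pac = np.array([0.1, -0.5, 0.6])
w_pac_jdf = np.array([0.4, 0.3, -0.2])
cov_na_pac = np.diag([0.01, 0.02, 0.01])   # invented covariances
cov_pac_jdf = np.diag([0.03, 0.01, 0.02])

w_na_jdf = w_na_pac + w_pac_jdf            # resultant pole vector
cov_na_jdf = cov_na_pac + cov_pac_jdf      # covariances of independent terms add

sigmas = np.sqrt(np.diag(cov_na_jdf))      # 1-sigma error on each component
print(w_na_jdf, sigmas)
```

In practice the covariances are generally non-diagonal and the resulting covariance is mapped into pole-position and rotation-rate uncertainties, but the addition rule is the core of the propagation.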
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
ACTS propagation concerns, issues, and plans
NASA Technical Reports Server (NTRS)
Davarian, Faramaz
1989-01-01
ACTS counters fading by resource sharing among the users. It provides a large margin only for those terminals which are put at risk by unfavorable atmospheric conditions. ACTS, as an experimental satellite, provides a 5 dB clear-weather margin and 10 dB of additional margin via rate reduction and encoding. For the uplink, this margin may be increased by exercising uplink power control. Some of the challenges faced by the radiowave propagation community are listed. The needs of the satellite, both general and specific, are also listed.
Drinking policies and exercise-associated hyponatraemia: is anyone still promoting overdrinking?
Beltrami, F G; Hew-Butler, T; Noakes, T D
2008-10-01
The purpose of this review is to describe the evolution of hydration research and advice on drinking during exercise from published scientific papers, books and non-scientific material (advertisements and magazine contents) and detail how erroneous advice is likely propagated throughout the global sports medicine community. Hydration advice from sports-linked entities, the scientific community, exercise physiology textbooks and non-scientific sources was analysed historically and compared with the most recent scientific evidence. Drinking policies during exercise have changed substantially throughout history. Since the mid-1990s, however, there has been an increase in the promotion of overdrinking by athletes. While the scientific community is slowly moving away from "blanket" hydration advice in which one form of advice fits all and towards more modest, individualised, hydration guidelines in which thirst is recognised as the best physiological indicator of each subject's fluid needs during exercise, marketing departments of the global sports drink industry continue to promote overdrinking.
Blanchet, Sophie; Richards, Carol L; Leblond, Jean; Olivier, Charles; Maltais, Désirée B
2016-06-01
This study, a quasi-experimental, one-group pretest-post-test design, evaluated the effects on cognitive functioning and cardiorespiratory fitness of 8-week interventions (aerobic exercise alone and aerobic exercise and cognitive training combined) in patients with chronic stroke and cognitive impairment living in the community (participants: n=14, 61.93±9.90 years old, 51.50±38.22 months after stroke, n=7 per intervention group). Cognitive functions and cardiorespiratory fitness were evaluated before and after intervention, and at a 3-month follow-up visit (episodic memory: revised-Hopkins Verbal Learning Test; working memory: Brown-Peterson paradigm; attention omission and commission errors: Continuous Performance Test; cardiorespiratory fitness: peak oxygen uptake during a symptom-limited, graded exercise test performed on a semirecumbent ergometer). Friedman's two-way analysis of variance by ranks evaluated differences in score distributions related to time (for the two groups combined). Post-hoc testing was adjusted for multiple comparisons. Compared with before the intervention, there was a significant reduction in attention errors immediately following the intervention (omission errors: 14.6±21.5 vs. 8±13.9, P=0.01; commission errors: 16.4±6.3 vs. 10.9±7.2, P=0.04), and in part at follow-up (omission errors on follow-up: 3.4±4.3, P=0.03; commission errors on follow-up: 13.2±7.6, P=0.42). These results suggest that attention may improve in chronic stroke survivors with cognitive impairment following short-term training that includes an aerobic component, without a change in cardiorespiratory fitness. Randomized-controlled studies are required to confirm these findings.
The effect of an acute bout of exercise on executive function among individuals with schizophrenia.
Subramaniapillai, Mehala; Tremblay, Luc; Grassmann, Viviane; Remington, Gary; Faulkner, Guy
2016-12-30
Cognitive impairment represents a significant source of disability among individuals with schizophrenia. Therefore, the aim of this study was to investigate, at a proof-of-concept level, whether a single bout of exercise can improve executive function among these individuals. In this within-participant, counterbalanced experiment, participants with schizophrenia (n=36) completed two sessions (cycling at moderate intensity and passively sitting) for 20 min, with a one-week washout period between the two sessions. Participants completed the Wisconsin Card Sorting Test (WCST) before and after each session to measure changes in executive function. The inclusion of both sessions completed by each participant in the analyses revealed a significant carryover effect. Consequently, only the WCST scores from the first session completed by each participant were analyzed. There was a significant time by session interaction effect for non-perseverative errors. Post-hoc Tukey's HSD contrasts revealed a significant reduction in non-perseverative errors in the exercise group that was of moderate-to-large effect. Furthermore, there was also a moderate between-group difference at post-testing. Therefore, an acute bout of exercise can improve performance on an executive function task in individuals with schizophrenia. Specifically, the reduction in non-perseverative errors on the WCST may reflect improved attention, inhibition and overall working memory. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
32 CFR 724.806 - Decisional issues.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the exercise of discretion on the issue of equity in the applicant's case. (ii) If a reason is based... should exercise its equitable powers to change the discharge on the basis of the alleged error. If it..., specific circumstances surrounding the offense, number of offenses, lack of mitigating circumstances, or...
A variational regularization of Abel transform for GPS radio occultation
NASA Astrophysics Data System (ADS)
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of the retrieved refractivity.
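The contrast drawn above, between directly inverting an integral operator (which amplifies measurement error, as the Abel inversion does) and solving a regularized fit to the forward transform, can be illustrated with a generic Tikhonov-regularized least-squares problem. The operator below is a simple cumulative-sum stand-in, not the Abel transform, and the noise level and regularization weight are arbitrary.

```python
import numpy as np

# Toy ill-conditioned inverse problem: recover x from y = A x + noise.
rng = np.random.default_rng(0)
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))
A = np.tril(np.ones((n, n))) / n           # cumulative (integral-like) operator
y = A @ x_true + 1e-3 * rng.standard_normal(n)

# Direct inversion differentiates the noisy data and blows the noise up:
x_direct = np.linalg.solve(A, y)

# Tikhonov regularization: minimize ||A x - y||^2 + lam * ||x||^2
lam = 1e-3
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_direct = np.linalg.norm(x_direct - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_direct, err_reg)  # the regularized error is much smaller
```

VR goes well beyond this sketch (error covariances, an adjoint-based iterative solver, a forecast background), but the underlying idea of trading a small bias for a large reduction in propagated noise is the same.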
Measurement configuration optimization for dynamic metrology using Stokes polarimetry
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan
2018-05-01
Because dynamic loading experiments such as shock compression tests are characterized by short duration, unrepeatability, and high costs, measurements with high temporal resolution and precise accuracy are required. A Stokes polarimeter with six parallel channels and temporal resolution on a ten-nanosecond scale has been developed in this paper to capture such instantaneous changes in optical properties. Since the measurement accuracy heavily depends on the configuration of the probing beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error-propagation-based measurement configuration optimization method for the Stokes polarimeter is proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate has been achieved by minimizing the merit function. Simulation and experimental results show a good agreement between the optimal measurement configuration achieved experimentally using the polarimeter and the theoretical prediction. In particular, the experimental results show that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.
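The optimization pattern described above (scan candidate configurations, evaluate an error-propagation merit function built from a Frobenius norm, keep the minimizer) can be sketched with a toy stand-in for the polarimeter's matrices; the 2x2 matrix below is invented for illustration and is not the paper's actual error-propagating matrix.

```python
import numpy as np

# Toy "error propagating matrix" whose conditioning depends on a
# configuration angle: well-conditioned near 45 degrees, poor near 0/90.
def propagation_matrix(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[1.0, np.cos(2 * t)],
                     [np.cos(2 * t), 1.0]])

def merit(theta_deg):
    # Frobenius norm of the inverse: large when input errors are amplified
    J = propagation_matrix(theta_deg)
    return np.linalg.norm(np.linalg.inv(J), ord="fro")

angles = np.arange(5, 90, 5)          # candidate configurations
best = min(angles, key=merit)         # configuration minimizing error amplification
print(best)  # 45
```

The real method evaluates the combined configuration and intrinsic error-propagating matrices over the (incident angle, polarizer azimuth) grid, but the selection logic is the same.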
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A particle filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the extended Kalman filter and splitting Gaussian mixture algorithms to demonstrate its proficiency.
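The propagate/weight/resample cycle of a bootstrap particle filter, the technique named above, can be sketched on a scalar toy state; a real orbit application would replace f() with orbital dynamics, use a 6-D state, and tune the noise levels, all of which are invented here.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Toy nonlinear dynamics standing in for orbit propagation
    return x + 0.1 * np.sin(x)

n_particles = 2000
particles = rng.normal(0.0, 1.0, n_particles)   # samples of the prior PDF
x_true = 0.5
meas_sigma = 0.2

for _ in range(10):
    x_true = f(x_true)
    # Propagate each particle through the dynamics plus process noise:
    particles = f(particles) + rng.normal(0.0, 0.05, n_particles)
    # Weight particles by the likelihood of a noisy measurement:
    z = x_true + rng.normal(0.0, meas_sigma)
    w = np.exp(-0.5 * ((z - particles) / meas_sigma) ** 2)
    w /= w.sum()
    # Resample: particles with high weight are duplicated, low are dropped
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

estimate = particles.mean()
print(estimate, x_true)
```

Because the state distribution is carried as samples rather than as a mean and covariance, heavy tails and multi-modality survive the propagation, which is exactly what the Gaussian-only filters discard.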
Image reduction pipeline for the detection of variable sources in highly crowded fields
NASA Astrophysics Data System (ADS)
Gössl, C. A.; Riffeser, A.
2002-01-01
We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction; Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images, and finally apply PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3-sigma detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Delta m = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Delta m = 0.6 mag) on a background signal of 18.1 mag/arcsec^2, based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
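The per-pixel error propagation emphasized above can be sketched for the bias-subtraction and flatfielding steps: each pixel carries a variance that is transformed alongside the signal. The gain, read noise, and pixel values below are invented, and real pipelines also track bad-pixel masks and flatfield uncertainty.

```python
import numpy as np

gain = 2.0        # e- per ADU (assumed)
read_noise = 5.0  # e- RMS (assumed)

raw = np.array([[1200.0, 1150.0], [1300.0, 1250.0]])  # raw frame, ADU
bias = 100.0                                          # master bias level, ADU
flat = np.array([[0.95, 1.00], [1.05, 1.00]])         # normalized flatfield

signal_adu = raw - bias
# Poisson variance of the detected electrons plus read noise, in ADU^2:
var_adu = signal_adu / gain + (read_noise / gain) ** 2

# Flatfielding divides the signal by flat, so the variance divides by flat^2:
science = signal_adu / flat
science_var = var_adu / flat ** 2
science_err = np.sqrt(science_var)   # per-pixel 1-sigma error map
print(science)
print(science_err)
```

Carrying the variance map through every arithmetic step is what lets the final photometry quote an honest error for each measurement.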
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.
The Importance of Semi-Major Axis Knowledge in the Determination of Near-Circular Orbits
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Schiesser, Emil R.
1998-01-01
Modern orbit determination has mostly been accomplished using Cartesian coordinates. This usage has carried over in recent years to the use of GPS for satellite orbit determination. The unprecedented positioning accuracy of GPS has tended to focus attention more on the system's capability to locate the spacecraft at a particular epoch than on its accuracy in determination of the orbit per se. As is well known, the latter depends on a coordinated knowledge of position, velocity, and the correlation between their errors. Failure to determine a properly coordinated position/velocity state vector at a given epoch can lead to an epoch state that does not propagate well and/or may not be usable for the execution of orbit adjustment maneuvers. For the quite common case of near-circular orbits, the degree to which position and velocity estimates are properly coordinated is largely captured by the error in semi-major axis (SMA) they jointly produce. Figure 1 depicts the relationships among radius error, speed error, and their correlation which exist for a typical low-altitude Earth orbit. Two familiar consequences of the relationships Figure 1 shows are the following: (1) downrange position error grows at the per-orbit rate of 3(pi) times the SMA error; (2) a velocity change imparted to the orbit will have an error of (pi) divided by the orbit period times the SMA error. A less familiar consequence occurs in the problem of initializing the covariance matrix for a sequential orbit determination filter. An initial covariance consistent with orbital dynamics should be used if the covariance is to propagate well. Properly accounting for the SMA error of the initial state in the construction of the initial covariance accomplishes half of this objective, by specifying the partition of the covariance corresponding to down-track position and radial velocity errors.
The remainder of the in-plane covariance partition may be specified in terms of the flight path angle error of the initial state. Figure 2 illustrates the effect of properly and improperly initializing a covariance. This figure was produced by propagating the covariance shown on the plot, without process noise, in a circular low Earth orbit whose period is 5828.5 seconds. The upper subplot, in which the proper relationships among position, velocity, and their correlation have been used, shows overall error growth, in terms of the standard deviations of the inertial position coordinates, of about half that of the lower subplot, whose initial covariance was based on other considerations.
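The two rules of thumb quoted in the abstract can be illustrated numerically for the orbit period it cites; the 10 m SMA error is a hypothetical value chosen for the example.

```python
import math

period = 5828.5     # s, orbit period from the abstract
sma_error = 10.0    # m, assumed semi-major-axis error

# (1) downrange position error grows at 3*pi times the SMA error per orbit:
downrange_growth_per_orbit = 3 * math.pi * sma_error   # ~94.2 m per orbit

# (2) a velocity change imparted to the orbit has an error of
#     pi / period times the SMA error:
dv_error = math.pi / period * sma_error                # ~5.4e-3 m/s

print(round(downrange_growth_per_orbit, 1), "m/orbit")
print(round(dv_error * 1000, 2), "mm/s")
```

Even a modest 10 m SMA error thus accumulates to roughly a hundred meters of downrange error per revolution, which is why SMA knowledge, rather than raw position accuracy, governs how well the epoch state propagates.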
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with a bit error rate of up to 0.10. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
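The error-propagation problem motivating the comparison above can be demonstrated with a tiny variable-length prefix code: a single flipped bit desynchronizes the decoder, so several characters are decoded in error, not just one. The four-symbol code below is hand-built for illustration and is not one of the study's actual codes.

```python
# A small prefix-free code over four symbols (Huffman-like in shape):
code = {"e": "0", "t": "10", "a": "110", "o": "111"}
decode_map = {v: k for k, v in code.items()}

def encode(text):
    return "".join(code[c] for c in text)

def decode(bits):
    # Greedy left-to-right parse; valid because the code is prefix-free
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in decode_map:
            out.append(decode_map[cur])
            cur = ""
    return "".join(out)

msg = "teatote"
bits = encode(msg)
# Flip a single bit in the middle of the stream:
flipped = bits[:3] + ("1" if bits[3] == "0" else "0") + bits[4:]

print(decode(bits))     # teatote
print(decode(flipped))  # desynchronized: several characters differ
```

Comma-free codes bound how far such a desynchronization can spread, which is the property the study exploits to trade a little compression for much less error propagation.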
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system using the specific PN code recommended is included.
The statistical fluctuation study of quantum key distribution in means of uncertainty principle
NASA Astrophysics Data System (ADS)
Liu, Dunwei; An, Huiyao; Zhang, Xiaoyu; Shi, Xuemei
2018-03-01
Laser defects in emitting single photons, photon signal attenuation, and error propagation have long caused serious difficulties in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuation of the number of received single photons, the yield of single photons, and the quantum bit error rate (QBER). We then calculate the error between the measured and true value of each parameter and account for the error propagated among all the measured values. We paraphrase the Gottesman-Lo-Lutkenhaus-Preskill (GLLP) formula in consideration of those parameters and generate the QKD simulation result. In this study, the safe distribution distance increases with the coding photon length. When the coding photon length is N = 10^11, the safe distribution distance reaches almost 118 km, a lower bound on the safe transmission distance compared with the 127 km obtained without the uncertainty principle. Our study is thus consistent with established theory while being more realistic.
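The kind of finite-sample statistical fluctuation the abstract analyzes can be sketched with a standard Hoeffding-style bound: with N detected signals and failure probability eps, the deviation between the observed and true rate scales as sqrt(ln(1/eps)/(2N)). This generic bound and the numbers below are illustrative, not the paper's actual security analysis.

```python
import math

def rate_fluctuation(n, eps=1e-10):
    # Hoeffding-style deviation bound for an observed rate from n samples,
    # holding except with probability eps
    return math.sqrt(math.log(1.0 / eps) / (2.0 * n))

for n in (10 ** 7, 10 ** 9, 10 ** 11):
    print(n, rate_fluctuation(n))
# The fluctuation term shrinks as 1/sqrt(N), which is why longer coding
# photon lengths extend the safe distribution distance.
```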
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. In so far as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
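The patch-level foraging-threshold adjustment described above can be sketched with invented numbers: food below the threshold in each patch is treated as unavailable, rather than subtracting the threshold once from a landscape total. The seed densities, energy content, and daily requirement below are hypothetical.

```python
threshold = 5.0      # kg/ha below which foraging is unprofitable (assumed)
energy_per_kg = 10.0 # metabolizable energy, MJ per kg of food (assumed)
daily_need = 1.2     # MJ per bird per day (assumed)

# Food density in each 1-ha patch, kg/ha (hypothetical):
patches = [12.0, 4.0, 30.0, 6.5]

# Patch-level adjustment: only food above the threshold in each patch
# counts; a patch entirely below the threshold contributes nothing.
available = sum(max(d - threshold, 0.0) for d in patches)

# Carrying capacity expressed as bird-energy-days supported:
energy_days = available * energy_per_kg / daily_need
print(available, round(energy_days, 1))
```

Subtracting the threshold from the landscape total instead (here, 52.5 - 5 = 47.5 kg) would overstate availability, because the 4.0 kg/ha patch is unprofitable in its entirety; this is the bias the method corrects.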
Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students
ERIC Educational Resources Information Center
Muzangwa, Jonatan; Chifamba, Peter
2012-01-01
This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 B.Ed. Mathematics students at Great Zimbabwe University. Data were gathered through two exercises on Calculus 1 and 2. The analysis of the test results showed that a majority of the errors were due…
Pricing Employee Stock Options (ESOs) with Random Lattice
NASA Astrophysics Data System (ADS)
Chendra, E.; Chin, L.; Sukmana, A.
2018-04-01
Employee Stock Options (ESOs) are stock options granted by companies to their employees. Unlike standard options that can be traded by typical institutional or individual investors, employees cannot sell or transfer their ESOs to other investors. These sale restrictions may induce the ESO holder to exercise early. In a much-cited paper, Hull and White propose a binomial lattice for valuing ESOs which assumes that employees voluntarily exercise their ESOs once the stock price reaches a horizontal psychological barrier. Due to nonlinearity errors, the numerical pricing results oscillate significantly and may lead to large pricing errors. In this paper, we use the random lattice method to price the Hull-White ESO model. This method reduces the nonlinearity error by aligning a layer of nodes of the random lattice with the psychological barrier.
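A minimal CRR binomial sketch of the Hull-White exercise rule (voluntary exercise as soon as the stock reaches the barrier M·K); the vesting period and employee exit rate of the full model, and the node-alignment trick of the paper, are omitted here, and the parameter values used below are made up:

```python
import math

def eso_binomial(S0, K, M, r, sigma, T, steps):
    """Simplified ESO value on a CRR binomial lattice: the option is
    exercised voluntarily whenever the stock price reaches the
    psychological barrier M*K (vesting and exit rate ignored)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0)
              for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):     # backward induction
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            values[j] = S - K if S >= M * K else cont  # barrier exercise
        values = values[:i + 1]
    return values[0]
```

Because early exercise of a call on a non-dividend-paying stock forfeits time value, lowering the barrier M can only decrease the sketch's value relative to the no-barrier (European) case.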
NASA Astrophysics Data System (ADS)
Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.
2005-08-01
MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research to be flown and installed onboard the International Space Station in 2007. The validity of the acquired data depends on controlling and reducing all significant error sources. One of them is misalignment of the joint rotation axis with respect to the motor axis. The error induced in the measurements is proportional to the misalignment between the two axes; therefore, the restraint system's performance is critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while the exercise is performed (results: elbow movement 13.94 mm +/- 5.45, knee movement 22.36 mm +/- 6.06) and reproducibility of human positioning (results: elbow movement 2.82 mm +/- 1.56, knee movement 7.45 mm +/- 4.8). These results allow measurement errors induced by misalignment to be limited.
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
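The Tapley-Choe algorithm keeps the factor triangular with orthogonal transformations and never forms the full covariance; the sketch below only illustrates the triangular-form propagation itself, by forming P = Φ S Sᵀ Φᵀ + Q explicitly and re-factoring it with a Cholesky decomposition (function names and the re-factorization shortcut are ours):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def cholesky(P):
    """Lower-triangular factor S with P = S S^T."""
    n = len(P)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(S[i][k] * S[j][k] for k in range(j))
            S[i][j] = (P[i][i] - s) ** 0.5 if i == j else (P[i][j] - s) / S[j][j]
    return S

def propagate_sqrt_cov(S, Phi, Q):
    """One time update of the square-root covariance, returned in
    lower triangular form (illustrative only: the actual algorithm
    avoids forming the full covariance P)."""
    P = matmul(matmul(Phi, matmul(S, transpose(S))), transpose(Phi))
    n = len(P)
    P = [[P[i][j] + Q[i][j] for j in range(n)] for i in range(n)]
    return cholesky(P)
```

The returned factor stays lower triangular, so a triangular square-root measurement update can be applied to it directly, as the abstract describes.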
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, URL address: http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
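The Discrete Stochastic Arithmetic idea behind CADNA can be caricatured in a few lines: run the computation several times under randomized rounding and estimate how many leading digits the runs share. CADNA switches the actual FPU rounding mode and wraps every operation in new Fortran types; in this sketch a random one-ulp jitter stands in for the rounding mode, and the function names are ours, not the CADNA API:

```python
import math
import random

def jitter(x):
    """Mimic a random rounding mode by perturbing x by one random ulp."""
    return x + random.choice((-1.0, 0.0, 1.0)) * math.ulp(x)

def stochastic_runs(compute, n_runs=3):
    """Run the same computation several times under randomized rounding
    and estimate the number of common exact significant digits, in the
    spirit of Discrete Stochastic Arithmetic."""
    results = [compute() for _ in range(n_runs)]
    mean = sum(results) / n_runs
    spread = max(abs(r - mean) for r in results)
    if spread == 0.0:
        return results, float("inf")  # runs agree to full precision
    digits = math.log10(abs(mean) / spread) if mean else 0.0
    return results, max(digits, 0.0)
```

For a well-conditioned computation (e.g. summing a harmonic series with `jitter` applied to each partial sum) the estimated digit count stays high; a catastrophically cancelling computation would drive it toward zero, which is how instabilities are flagged.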
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential error δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential errors δa = e·δA_w/(mc²), it then follows that the correlation of the normalized vector potential errors, ⟨δa_x(z₁)·δa_x(z₂)⟩, is given by a double integral over z′ and z″ of the field-error correlation ⟨δB_w(z′)·δB_w(z″)⟩. Throughout the following, terms of order O(z_c/z) are neglected. A similar expression holds for the y-component of the normalized vector potential errors.
46 CFR 520.14 - Special permission.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the Commission, in its discretion and for good cause shown, to permit increases or decreases in rates... its discretion and for good cause shown, permit departures from the requirements of this part. (b) Clerical errors. Typographical and/or clerical errors constitute good cause for the exercise of special...
46 CFR 520.14 - Special permission.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the Commission, in its discretion and for good cause shown, to permit increases or decreases in rates... its discretion and for good cause shown, permit departures from the requirements of this part. (b) Clerical errors. Typographical and/or clerical errors constitute good cause for the exercise of special...
22 CFR 34.18 - Waivers of indebtedness.
Code of Federal Regulations, 2011 CFR
2011-04-01
... known through the exercise of due diligence that an error existed but failed to take corrective action... elapsed between the erroneous payment and discovery of the error and notification of the employee; (D... to duty because of disability (supported by an acceptable medical certificate); and (D) Whether...
Rigorous covariance propagation of geoid errors to geodetic MDT estimates
NASA Astrophysics Data System (ADS)
Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.
2012-04-01
The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, a realistic statistical error estimate is available for the geoid component, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or neglecting covariances on the MDT errors and their correlations. When an MDT is derived, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
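The core operation in rigorous covariance propagation is Σ_out = J Σ Jᵀ with the full variance-covariance matrix. A toy sketch (matrices hypothetical, far smaller than a real spherical-harmonic VCM) showing why keeping the off-diagonal covariances matters:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def propagate_cov(J, Sigma):
    """Rigorous linear error propagation Sigma_out = J Sigma J^T,
    keeping all covariances (off-diagonal terms) of the input errors."""
    Jt = [list(r) for r in zip(*J)]
    return matmul(matmul(J, Sigma), Jt)
```

For a difference-type functional J = [1, -1] applied to two strongly correlated quantities (correlation 0.9, unit variances), the full propagation gives a variance of 0.2, while ignoring the covariances would give 2.0, an order of magnitude too pessimistic.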
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate its declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best-fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
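The interplay of the two error sources can be sketched numerically. Assuming independent individual (residual) deviations across trees and a fitted-mean error shared by all trees in a plot (numbers below are hypothetical, not the sugar maple data):

```python
import math

def plot_total_sd(n_trees, sd_individual, se_mean):
    """SD of a plot-level total predicted from an allometric model.
    Individual deviations are assumed independent across trees, so
    their variances add (n * sd^2); the error in the fitted mean is
    shared by every tree, so it adds linearly before squaring
    ((n * se)^2).  Returns (total SD, share of variance due to
    individual variation)."""
    var_individual = n_trees * sd_individual ** 2
    var_mean = (n_trees * se_mean) ** 2
    total_sd = math.sqrt(var_individual + var_mean)
    individual_share = var_individual / (var_individual + var_mean)
    return total_sd, individual_share
```

With `sd_individual = 1.0` and `se_mean = 0.1`, the individual term supplies 99% of the variance for a single tree but only half at 100 trees, and keeps shrinking; the crossover sits at n = (sd/se)², mirroring the abstract's point that individual uncertainty becomes negligible for large samples.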
Scene Text Recognition using Similarity and a Lexicon with Sparse Belief Propagation
Weinman, Jerod J.; Learned-Miller, Erik; Hanson, Allen R.
2010-01-01
Scene text recognition (STR) is the recognition of text anywhere in the environment, such as signs and store fronts. Relative to document recognition, it is challenging because of font variability, minimal language context, and uncontrolled conditions. Much information available to solve this problem is frequently ignored or used sequentially. Similarity between character images is often overlooked as useful information. Because of language priors, a recognizer may assign different labels to identical characters. Directly comparing characters to each other, rather than only a model, helps ensure that similar instances receive the same label. Lexicons improve recognition accuracy but are used post hoc. We introduce a probabilistic model for STR that integrates similarity, language properties, and lexical decision. Inference is accelerated with sparse belief propagation, a bottom-up method for shortening messages by reducing the dependency between weakly supported hypotheses. By fusing information sources in one model, we eliminate unrecoverable errors that result from sequential processing, improving accuracy. In experimental results recognizing text from images of signs in outdoor scenes, incorporating similarity reduces character recognition error by 19%, the lexicon reduces word recognition error by 35%, and sparse belief propagation reduces the lexicon words considered by 99.9% with a 12X speedup and no loss in accuracy. PMID:19696446
Implementation of neural network for color properties of polycarbonates
NASA Astrophysics Data System (ADS)
Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.
2014-05-01
In the present paper, the applicability of artificial neural networks (ANN) to the color properties of plastics is investigated. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18, and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM) in the feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives, and pigments, while three tristimulus color values, L*, a*, and b*, were used as the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), and mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons on the hidden layer in the feed-forward back-propagation ANN model showed the best results in the present study. The degree of accuracy of the ANN model in reducing errors proved acceptable in all statistical analyses, as shown in the results. It was concluded that ANN provides a feasible method of error reduction in predicting specific color tristimulus values.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that codes the source information into multiple descriptions and transmits them over different channels in a packet network or error-prone wireless environment, so as to achieve graceful degradation if some descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We use this correlation, together with a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Second, in each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder.
Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet loss rate.
Is the deleterious effect of cryotherapy on proprioception mitigated by exercise?
Ribeiro, F; Moreira, S; Neto, J; Oliveira, J
2013-05-01
This study aimed to examine the acute effects of cryotherapy on knee position sense and to determine the time necessary to normalize joint position sense when exercising after cryotherapy. 12 subjects visited the laboratory twice, once for cryotherapy followed by 30 min of exercise on a cycloergometer and once for cryotherapy followed by 30 min of rest. Sessions were randomly ordered and separated by 48 h. Cryotherapy was applied in the form of an ice bag, filled with 1 kg of crushed ice, for 20 min. Knee position sense was measured at baseline, after cryotherapy, and every 5 min after cryotherapy removal for a total of 30 min. The main effect of cryotherapy was significant, showing an increase in absolute (F7,154=43.76, p<0.001) and relative (F7,154=7.97, p<0.001) errors after cryotherapy. The intervention after cryotherapy (rest vs. exercise) revealed a significant main effect only for absolute error (F7,154=4.05, p<0.001); that is, when subjects exercised after cryotherapy, proprioceptive acuity returned to baseline values faster (10 min vs. 15 min). Our results indicate that the deleterious effect of cryotherapy on proprioception is mitigated by low-intensity exercise, reducing the time necessary to normalize knee position sense from 15 to 10 min. © Georg Thieme Verlag KG Stuttgart · New York.
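The absolute and relative errors reported in such repositioning studies are conventionally the mean unsigned deviation (inaccuracy) and the mean signed deviation (directional bias) from the target angle. A minimal sketch with hypothetical angles in degrees:

```python
def position_sense_errors(target, trials):
    """Absolute error (mean unsigned deviation, overall inaccuracy)
    and relative error (mean signed deviation, directional bias) for
    joint repositioning trials toward a target angle."""
    diffs = [t - target for t in trials]
    absolute = sum(abs(d) for d in diffs) / len(diffs)
    relative = sum(diffs) / len(diffs)
    return absolute, relative
```

Note how the two measures can diverge: trials that overshoot and undershoot symmetrically yield a large absolute error but a relative error near zero.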
The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.
ERIC Educational Resources Information Center
O'Reilly, James E.
1986-01-01
Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
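The error vector magnitude figure used in the proposed methodology can be computed from constellation samples roughly as follows; the toy QPSK points in the usage example are our own, not the paper's data:

```python
import math

def evm_percent(received, ideal):
    """RMS error vector magnitude of received constellation points
    (as (I, Q) pairs), expressed as a percentage of the RMS ideal
    symbol magnitude."""
    err = sum((rx - ix) ** 2 + (ry - iy) ** 2
              for (rx, ry), (ix, iy) in zip(received, ideal))
    ref = sum(ix ** 2 + iy ** 2 for ix, iy in ideal)
    return 100.0 * math.sqrt(err / ref)
```

An SET-induced distortion of a single QPSK symbol, e.g. shifting (1, 1) to (1.1, 1), registers directly as a nonzero EVM, which is what lets the methodology localize and quantify transient-induced errors.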
NASA Astrophysics Data System (ADS)
Kemp, Z. D. C.
2018-04-01
Determining the phase of a wave from intensity measurements has many applications in fields such as electron microscopy, visible light optics, and medical imaging. Propagation based phase retrieval, where the phase is obtained from defocused images, has shown significant promise. There are, however, limitations in the accuracy of the retrieved phase arising from such methods. Sources of error include shot noise, image misalignment, and diffraction artifacts. We explore the use of artificial neural networks (ANNs) to improve the accuracy of propagation based phase retrieval algorithms applied to simulated intensity measurements. We employ a phase retrieval algorithm based on the transport-of-intensity equation to obtain the phase from simulated micrographs of procedurally generated specimens. We then train an ANN with pairs of retrieved and exact phases, and use the trained ANN to process a test set of retrieved phase maps. The total error in the phase is significantly reduced using this method. We also discuss a variety of potential extensions to this work.
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.
2015-01-01
Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL). FCL paradigm adopts the principle of learning, which is used to calculate Best Matched Node (BMN) which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with the existing network like Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO) and a single input and single output (SISO) gas furnace Box-Jenkins time series data. PMID:26366169
Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay
NASA Technical Reports Server (NTRS)
Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.
1991-01-01
An important source of error in VLBI estimates of baseline length is unmodeled variation of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.
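The WRMS scatter used to compare the two correction methods is the weighted RMS deviation of the repeated baseline estimates about their weighted mean. A minimal sketch with the conventional 1/σ² weights (the numbers in the test are hypothetical, not the Onsala baselines):

```python
import math

def wrms_scatter(values, sigmas):
    """Weighted root-mean-square scatter of repeated estimates about
    their weighted mean, with weights 1/sigma^2."""
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return math.sqrt(sum(wi * (v - mean) ** 2
                         for wi, v in zip(w, values)) / sum(w))
```

Estimates with large formal errors are down-weighted, so a single poorly determined session barely moves the repeatability statistic.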
Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy
Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.
2011-01-01
Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated, giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second. PMID:21361700
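A minimal version of the velocity estimate such a tracking algorithm might produce: fit contraction onset time against axial position by least squares and invert the slope, taking the earliest-onset site as the pacemaker end. This is our own sketch of the general approach, not the paper's algorithm, and the data in the test are made up:

```python
def propagation_velocity(positions_mm, onset_times_s):
    """Estimate contraction propagation velocity (mm/s) from the
    least-squares slope of onset time vs. axial position; the site
    with the earliest onset marks the pacemaker end."""
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    mt = sum(onset_times_s) / n
    slope = (sum((x - mx) * (t - mt)
                 for x, t in zip(positions_mm, onset_times_s))
             / sum((x - mx) ** 2 for x in positions_mm))  # s per mm
    pacemaker = positions_mm[onset_times_s.index(min(onset_times_s))]
    return 1.0 / slope, pacemaker
```

Fitting all detection sites at once, rather than differencing one pair of points, averages out per-site timing noise, which is one reason automated tracking beats manual two-point measurements.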
Collis, Jon M; Frank, Scott D; Metzler, Adam M; Preston, Kimberly S
2016-05-01
Sound propagation predictions for ice-covered ocean acoustic environments do not match observational data: received levels in nature are less than expected, suggesting that the effects of the ice are substantial. Effects due to elasticity in overlying ice can be significant enough that low-shear approximations, such as effective complex density treatments, may not be appropriate. Building on recent elastic seafloor modeling developments, a range-dependent parabolic equation solution that treats the ice as an elastic medium is presented. The solution is benchmarked against a derived elastic normal mode solution for range-independent underwater acoustic propagation. Results from both solutions accurately predict plate flexural modes that propagate in the ice layer, as well as Scholte interface waves that propagate at the boundary between the water and the seafloor. The parabolic equation solution is used to model a scenario with range-dependent ice thickness and a water sound speed profile similar to those observed during the 2009 Ice Exercise (ICEX) in the Beaufort Sea.
van Dyk, N; Witvrouw, E; Bahr, R
2018-04-25
In elite sport, the use of strength testing to establish muscle function and performance is common. Traditionally, isokinetic strength tests have been used, measuring torque during concentric and eccentric muscle action. A device that measures eccentric hamstring muscle strength while performing the Nordic hamstring exercise is now also frequently used. This study aimed to investigate the variability of isokinetic muscle strength over time, for example between seasons, and the relationship between isokinetic testing and the new Nordic hamstring exercise device. All teams (n = 18) eligible to compete in the premier football league in Qatar underwent a comprehensive strength assessment during their periodic health evaluation at Aspetar Orthopaedic and Sports Medicine Hospital in Qatar. Isokinetic strength was investigated for measurement error and correlated to Nordic hamstring exercise strength. Of the 529 players included, 288 had repeated tests with 1 or 2 seasons between test occasions. Variability between test occasions was substantial, with a measurement error of approximately 25 Nm (15%), whether the tests were separated by 1 or 2 seasons. Considering hamstring injuries, the same pattern was observed among injured (n = 60) and uninjured (n = 228) players. A poor correlation (r = .35) was observed between peak isokinetic hamstring eccentric torque and Nordic hamstring exercise peak force. The strength imbalance between limbs calculated for the two test modes was not correlated (r = .037). There is substantial intraindividual variability in all isokinetic test measures, whether separated by 1 or 2 seasons, irrespective of injury. Also, eccentric hamstring strength and limb-to-limb imbalance were poorly correlated between the isokinetic and Nordic hamstring exercise tests. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
2008-09-30
propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors with unknown sensor responses and ... frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be ...
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Rodgers, E. B.
1977-01-01
An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.
Verb Errors of Bilingual and Monolingual Basic Writers
ERIC Educational Resources Information Center
Griswold, Olga
2017-01-01
This study analyzed the grammatical control of verbs exercised by 145 monolingual English and Generation 1.5 bilingual developmental writers in narrative essays using quantitative and qualitative methods. Generation 1.5 students made more errors than their monolingual peers in each category investigated, albeit in only 2 categories was the…
Amplify Errors to Minimize Them
ERIC Educational Resources Information Center
Stewart, Maria Shine
2009-01-01
In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…
Cross Section Sensitivity and Propagated Errors in HZE Exposures
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.
2005-01-01
It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character through interactions with shielding-material nuclei, ideally forming less dangerous species. Clearly, the fidelity of the nuclear cross sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, in units of cSv/yr, associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions, and heavy ions. We investigate the sensitivity of dose-equivalent calculations to errors in nuclear fragmentation cross-sections. We perform this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum, and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
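The numerical-differentiation step described above can be sketched as follows. This is a toy illustration, not the authors' transport code: the dose functional, weights, and cross-section values are invented purely to demonstrate central-difference sensitivity estimation and first-order error propagation.

```python
import numpy as np

def dose_equivalent(sigma, weights):
    """Toy dose-equivalent functional: a weighted sum of fluence-like terms
    that decay with increasing cross section (an assumed stand-in model)."""
    return np.sum(weights * np.exp(-sigma))

def sensitivities(sigma, weights, rel_step=1e-6):
    """Central-difference dH/dsigma_j for every cross section."""
    grad = np.zeros_like(sigma)
    for j in range(len(sigma)):
        h = rel_step * max(abs(sigma[j]), 1.0)
        up, dn = sigma.copy(), sigma.copy()
        up[j] += h
        dn[j] -= h
        grad[j] = (dose_equivalent(up, weights) - dose_equivalent(dn, weights)) / (2 * h)
    return grad

sigma = np.array([0.5, 1.2, 0.8])    # hypothetical fragmentation cross sections
weights = np.array([2.0, 1.0, 3.0])  # hypothetical particle-type weights
grad = sensitivities(sigma, weights)

# First-order propagated error in H from cross-section errors dsigma
dsigma = 0.05 * sigma                # assume 5% cross-section uncertainty
dH = np.sqrt(np.sum((grad * dsigma) ** 2))
```

The same finite-difference loop scales, in principle, to the full set of projectile-fragment combinations; only the dose functional changes.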
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
Lower Extremity Landing Biomechanics in Both Sexes After a Functional Exercise Protocol
Wesley, Caroline A.; Aronson, Patricia A.; Docherty, Carrie L.
2015-01-01
Context: Sex differences in landing biomechanics play a role in increased rates of anterior cruciate ligament (ACL) injuries in female athletes. Exercising to various states of fatigue may negatively affect landing mechanics, resulting in a higher injury risk, but research is inconclusive regarding sex differences in response to fatigue. Objective: To use the Landing Error Scoring System (LESS), a valid clinical movement-analysis tool, to determine the effects of exercise on the landing biomechanics of males and females. Design: Cross-sectional study. Setting: University laboratory. Patients or Other Participants: Thirty-six (18 men, 18 women) healthy college-aged athletes (members of varsity, club, or intramural teams) with no history of ACL injury or prior participation in an ACL injury-prevention program. Intervention(s): Participants were videotaped performing 3 jump-landing trials before and after performance of a functional, sportlike exercise protocol consisting of repetitive sprinting, jumping, and cutting tasks. Main Outcome Measure(s): Landing technique was evaluated using the LESS. A higher LESS score indicates more errors. The mean of the 3 LESS scores in each condition (pre-exercise and postexercise) was used for statistical analysis. Results: Women scored higher on the LESS (6.3 ± 1.9) than men (5.0 ± 2.3) regardless of time (P = .04). Postexercise scores (6.3 ± 2.1) were higher than preexercise scores (5.0 ± 2.1) for both sexes (P = .01), but women were not affected to a greater degree than men (P = .62). Conclusions: As evidenced by their higher LESS scores, females demonstrated more errors in landing technique than males, which may contribute to their increased rate of ACL injury. Both sexes displayed poor technique after the exercise protocol, which may indicate that participants experience a higher risk of ACL injury in the presence of fatigue. PMID:26285090
The Trojan Lifetime Champions Health Survey: development, validity, and reliability.
Sorenson, Shawn C; Romano, Russell; Scholefield, Robin M; Schroeder, E Todd; Azen, Stanley P; Salem, George J
2015-04-01
Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Descriptive laboratory study. A large National Collegiate Athletic Association Division I university. A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. 
We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations.
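The test-retest correlations above can be reproduced in miniature. The sketch below assumes ICC(2,1) (two-way random effects, absolute agreement), since the abstract does not specify which ICC form was used, and the paper/electronic scores are invented:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) from an (n subjects x k sessions) score matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-session means
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical paper vs. electronic scores for 6 respondents
scores = np.array([[10, 11], [14, 13], [18, 18], [22, 24], [26, 25], [30, 31]])
reliability = icc_2_1(scores)
```

Perfectly reproduced scores yield an ICC of exactly 1; the invented data above, with small session-to-session discrepancies, land near the survey's reported 0.8-0.9 range.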
NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.
Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen
2017-04-01
We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation steps were weaning from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors, typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
The use of propagation path corrections to improve regional seismic event location in western China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steck, L.K.; Cogbill, A.H.; Velasco, A.A.
1999-03-01
In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines and a nonlinear grid-search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events, using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants but requires adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.
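The separation of measurement error from modeling error can be sketched as a simple variance decomposition. The residuals below are synthetic, and the 0.3 s picking error and 0.8 s modeling error are invented values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic travel-time residuals: modeling error plus picking error
n = 2000
pick_sigma = 0.3    # known picking (measurement) error, s (assumed)
model_sigma = 0.8   # modeling error to be recovered, s (assumed)
residuals = rng.normal(0, model_sigma, n) + rng.normal(0, pick_sigma, n)

# Total residual variance = modeling variance + measurement variance,
# so the modeling variance is estimated by subtraction.
total_var = residuals.var(ddof=1)
model_var_est = total_var - pick_sigma ** 2
```

In practice the modeling variance would be estimated per distance bin rather than globally, but the subtraction step is the same.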
NASA Astrophysics Data System (ADS)
Gao, X.; Li, T.; Zhang, X.; Geng, X.
2018-04-01
In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height-measurement error. A simulation analysis using TanDEM-X parameters was then performed to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full emulation validation of the InSAR stochastic model was performed on the basis of SRTM DEM data and TanDEM-X parameters. The spatial distribution characteristics and error-propagation behavior of InSAR height measurement were fully evaluated.
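A minimal sketch of the baseline-to-height error propagation, using the standard InSAR height relation with illustrative geometry rather than actual TanDEM-X parameters: with h = lam * r * sin(theta) * phi / (2 * pi * p * B_perp), the partial derivative is dh/dB_perp = -h / B_perp, so a baseline error dB maps to a height error of about (h / B_perp) * dB.

```python
import numpy as np

lam = 0.031            # X-band wavelength, m (assumed)
r = 600e3              # slant range, m (assumed)
theta = np.deg2rad(35) # incidence angle (assumed)
B_perp = 200.0         # perpendicular baseline, m (assumed)
p = 1                  # bistatic factor
phi = 2.0              # unwrapped interferometric phase, rad (assumed)

def height(B):
    return lam * r * np.sin(theta) * phi / (2 * np.pi * p * B)

h = height(B_perp)
sigma_B = 0.01                              # 1 cm baseline error (assumed)
sigma_h_analytic = h / B_perp * sigma_B     # first-order propagation

# Monte Carlo check of the first-order propagation
rng = np.random.default_rng(1)
h_mc = height(B_perp + rng.normal(0, sigma_B, 100_000))
sigma_h_mc = h_mc.std(ddof=1)
```

Because the relative baseline error is tiny here, the first-order and Monte Carlo results agree closely; larger baseline errors would expose the linearization error.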
Radio propagation through solar and other extraterrestrial ionized media
NASA Technical Reports Server (NTRS)
Smith, E. K.; Edelson, R. E.
1980-01-01
The present S- and X-band communications needs in deep space are addressed to illustrate the aspects which are affected by propagation through extraterrestrial plasmas. The magnitude, critical threshold, and frequency dependence of some eight propagation effects for an S-band propagation path passing within 4 solar radii of the Sun are described. The theory and observation of propagation in extraterrestrial plasmas are discussed, and the various plasma states along a near-solar propagation path are illustrated. Classical magnetoionic theory (cold anisotropic plasma) is examined for its applicability to the path in question. The characteristics of the plasma states found along the path are summarized, and the errors in some of the standard approximations are indicated. Models of extraterrestrial plasmas are included. Modeling of the electron density in the solar corona and solar wind is emphasized, but some cursory information on the terrestrial planets plus Jupiter is included.
2017-01-01
Purpose/Background: Shoulder proprioception is essential in the activities of daily living as well as in sports. Acute muscle fatigue is believed to cause a deterioration of proprioception, increasing the risk of injury. The purpose of this study was to evaluate if fatigue of the shoulder external rotators during eccentric versus concentric activity affects shoulder joint proprioception as determined by active reproduction of position. Study design: Quasi-experimental trial. Methods: Twenty-two healthy subjects with no recent history of shoulder pathology were randomly allocated to either a concentric or an eccentric exercise group for fatiguing the shoulder external rotators. Proprioception was assessed before and after the fatiguing protocol using an isokinetic dynamometer, by measuring active reproduction of position at 30° of shoulder external rotation, reported as absolute angular error. The fatiguing protocol consisted of sets of fifteen consecutive external rotator muscle contractions in either the concentric or eccentric action. The subjects were exercised until there was a 30% decline from the peak torque of the subjects' maximal voluntary contraction over three consecutive muscle contractions. Results: A one-way analysis of variance test revealed no statistical difference in absolute angular error (p > 0.05) between the concentric and eccentric groups. Moreover, no statistical difference (p > 0.05) was found in absolute angular error between pre- and post-fatigue in either group. Conclusions: Eccentric exercise does not seem to acutely affect shoulder proprioception to a larger extent than concentric exercise. Level of evidence: 2b PMID:28515976
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
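The LHS propagation step might look like the following sketch. The regression coefficients, their standard errors, and the data uncertainty are hypothetical; `scipy.stats.qmc` supplies the Latin hypercube sampler:

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical logistic model: P(elevated contaminant) = sigmoid(b0 + b1*x)
beta_mean = np.array([-2.0, 0.8])  # intercept, slope (assumed)
beta_sd = np.array([0.3, 0.1])     # coefficient standard errors (model error)
x_mean, x_sd = 3.0, 0.5            # explanatory variable and its data error

# Latin hypercube sample over the three uncertain inputs, mapped to normals
sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=10_000)
b0 = norm.ppf(u[:, 0], beta_mean[0], beta_sd[0])
b1 = norm.ppf(u[:, 1], beta_mean[1], beta_sd[1])
x = norm.ppf(u[:, 2], x_mean, x_sd)

# Propagate through the logistic prediction; the spread of `prob` is the
# prediction uncertainty, summarized as a 95% interval.
prob = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
lo, hi = np.percentile(prob, [2.5, 97.5])
```

Sampling model error and data error jointly, as above, is what lets the resulting distribution of `prob` reflect both sources of uncertainty at once.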
Mohammadi, Farshid; Azma, Kamran; Naseh, Iman; Emadifard, Reza; Etemadi, Yasaman
2013-01-01
The high incidence of lower limb injuries associated with physical exercises in military conscripts suggests that fatigue may be a risk factor for injuries. Researchers have hypothesized that lower limb injuries may be related to altered ankle and knee joint position sense (JPS) due to fatigue. To evaluate whether military exercises could alter JPS and to examine the possible relation of JPS to future lower extremity injuries in military service. Cohort study. Laboratory. A total of 50 male conscripts (age = 21.4 ± 2.3 years, height = 174.5 ± 6.4 cm, mass = 73.1 ± 6.3 kg) from a single military base were recruited randomly. Main Outcome Measure(s): Participants performed 8 weeks of physical activities at the beginning of a military course. In the first part of the study, we instructed participants to recognize predetermined positions before and after military exercises so we could examine the effects of military exercise on JPS. The averages of the absolute error and the variable error of 3 trials were recorded. We collected data on the frequency of lower extremity injuries over 8 weeks. Next, the participants were divided into 2 groups: injured and uninjured. Separate 2 × 2 × 2 (group-by-time-by-joint) mixed-model analyses of variance were used to determine main effects and interactions of these factors for each JPS measure. In the second part of the study, we examined whether the effects of fatigue on JPS were related to the development of injury during an 8-week training program. We calculated Hedges effect sizes for JPS changes postexercise in each group and compared change scores between groups. We found group-by-time interactions for all JPS variables (F range = 2.86-4.05, P < .01). All participants showed increases in JPS errors postexercise (P < .01), but the injured group had greater changes for all the variables (P < .01).
Military conscripts who sustained lower extremity injuries during an 8-week military exercise program had greater loss of JPS acuity than conscripts who did not sustain injuries. The changes in JPS found after 1 bout of exercise may have predictive ability for future musculoskeletal injuries.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the law of propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of this study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even for non-linear functions, provided that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
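The comparison at the heart of the study can be illustrated on one of its simpler functions, the distance computed from Cartesian coordinates. The coordinates and error levels below are invented:

```python
import numpy as np

p1 = np.array([100.0, 200.0])
p2 = np.array([400.0, 600.0])
sigma = 0.05  # std. dev. of every coordinate, m (assumed)

dx, dy = p2 - p1
d = np.hypot(dx, dy)  # non-linear function of the four coordinates

# Law of propagation of variance (first-order Taylor): sum of squared
# partial derivatives times the coordinate variances.
grad = np.array([-dx / d, -dy / d, dx / d, dy / d])
sigma_d_gauss = np.sqrt(np.sum(grad ** 2) * sigma ** 2)

# Monte Carlo propagation of the same uncertainty
rng = np.random.default_rng(7)
n = 200_000
coords = rng.normal(np.r_[p1, p2], sigma, size=(n, 4))
d_mc = np.hypot(coords[:, 2] - coords[:, 0], coords[:, 3] - coords[:, 1])
sigma_d_mc = d_mc.std(ddof=1)
```

With observation errors this small relative to the distance, the two standard deviations agree to well under a percent, mirroring the paper's conclusion; degrading `sigma` drastically would start to expose the expansion error.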
Deviation diagnosis and analysis of hull flat block assembly based on a state space model
NASA Astrophysics Data System (ADS)
Zhang, Zhiying; Dai, Yinfang; Li, Zhen
2012-09-01
Dimensional control is one of the most important challenges in the shipbuilding industry. To predict assembly dimensional variation in hull flat block construction, a variation stream model based on state space is presented in this paper, which can be further applied to accuracy control in shipbuilding. Part accumulative error, locating error, and welding deformation are taken into consideration in the model, and the variation propagation mechanisms and accumulation rules in the assembly process are analyzed. A model is then developed to describe the variation propagation throughout the assembly process. Finally, an example of flat block construction from an actual shipyard is given. The result shows that the method is effective and useful.
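A minimal state-space sketch of this kind of variation propagation (all matrices and error magnitudes are hypothetical, not the paper's shipyard data): deviations x propagate through assembly stations k as x_{k+1} = A x_k + B u_k + w_k, with u_k the locating error introduced at station k and w_k welding noise.

```python
import numpy as np

def propagate(x0, P0, A, B, stations):
    """Propagate the deviation mean x and covariance P through stations,
    where each station contributes a locating-error mean u and noise Q."""
    x, P = x0, P0
    for u, Q in stations:
        x = A @ x + B @ u
        P = A @ P @ A.T + Q  # accumulated variation
    return x, P

A = np.array([[1.0, 0.2],
              [0.0, 1.0]])              # deviation carry-over (assumed)
B = np.eye(2)
x0 = np.zeros(2)                        # nominal part: no initial deviation
P0 = np.diag([0.1 ** 2, 0.1 ** 2])      # part accumulative error (assumed)
stations = [(np.array([0.05, 0.0]),     # locating-error mean per station
             np.diag([0.02 ** 2, 0.02 ** 2]))] * 3  # welding noise

x_final, P_final = propagate(x0, P0, A, B, stations)
```

The covariance recursion makes the accumulation rule explicit: each station both transforms the incoming variation (A P Aᵀ) and adds its own (Q).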
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability, and little can be gained from improvement in analytical accuracy. In addition, a propagation-of-error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
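The covariance point can be illustrated with a toy error budget (all variances below are invented; the Skylab analysis used measured values). Because the balance is a signed sum of terms, its variance is the sum of the term variances plus the signed covariance contributions:

```python
import numpy as np

terms = ["intake", "urine", "evaporation", "body_mass_change"]
var = np.array([5.0, 4.0, 9.0, 400.0])  # per-term variances (g^2), assumed
cov = np.zeros((4, 4))
cov[0, 1] = cov[1, 0] = 1.5             # small intake-urine covariance, assumed

signs = np.array([1.0, -1.0, -1.0, -1.0])  # balance = intake - losses - dM
total_var = signs @ (np.diag(var) + cov) @ signs

mass_share = var[3] / total_var                    # body-mass contribution
cov_share = abs(total_var - var.sum()) / total_var  # covariance contribution
```

With numbers shaped like these, the body-mass term dominates the budget and the covariance terms stay under 10%, the same structure the analysis reports.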
McArdle Disease and Exercise Physiology.
Kitaoka, Yu
2014-02-25
McArdle disease (glycogen storage disease Type V; MD) is a metabolic myopathy caused by a deficiency in muscle glycogen phosphorylase. Since muscle glycogen is an important fuel for muscle during exercise, this inborn error of metabolism provides a model for understanding the role of glycogen in muscle function and the compensatory adaptations that occur in response to impaired glycogenolysis. Patients with MD have exercise intolerance with symptoms including premature fatigue, myalgia, and/or muscle cramps. Despite this, MD patients are able to perform prolonged exercise as a result of the "second wind" phenomenon, owing to the improved delivery of extra-muscular fuels during exercise. The present review will cover what this disease can teach us about exercise physiology, and particularly focuses on the compensatory pathways for energy delivery to muscle in the absence of glycogenolysis.
Memoir of the Long Range Acoustic Propagation Program (LRAPP)
2011-04-01
Excerpt from the report's exercise listing (a table flattened during extraction; entries give exercise number, name, date, and area; ellipses mark truncations):
... 75, Pacific, primarily Fleet exercise
21 Church Opal, Sep–Oct 75, N of Hawaii
22 Fixed-Mobile Exercise, ? 76, Pacific
23 CHURCH STROKE I, Jun–Jul 77, NE of ... Sea, Dan Ramsdale, PI
66 ICEX 90, 1990, N of Barrow AK
67 ICEX-90, 1990
68 OUTPOST AREA 90, 1990, North Pole
69 AREA 91, Ice Camps Opal, Crystal, Ruby, 1991 ...
... AREA 93, 1993, N of Greenland
75 OUTPOST AREA 94, 1994
76 BLAKE TEST, Atlantic
77 BOTTOM INTERACTION, SE Pacific
78 CHURCH OPAL, NE Pacific
79 CHURCH STROKE ...
NASA Astrophysics Data System (ADS)
Roman, D. R.; Smith, D. A.
2017-12-01
In 2022, the National Geodetic Survey will replace all three NAD 83 reference frames with four new terrestrial reference frames. Each frame will be named after a tectonic plate (North American, Pacific, Caribbean, and Mariana), and each will be related to the IGS frame through three Euler pole parameters (EPPs). This talk will focus on three main areas of error propagation when defining coordinates in these four frames: (1) the use of the small-angle approximation to relate true rotation about an Euler pole to small rotations about three Cartesian axes; (2) the current state of the art in determining the Euler poles of these four plates; and (3) the combination of IGS Cartesian coordinate uncertainties and EPP uncertainties into coordinate uncertainties in the four new frames. Discussion will also include recent efforts at improving the Euler poles for these frames and the expected dates when errors in the EPPs will cause an unacceptable level of uncertainty in the four new terrestrial reference frames.
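The first of those areas can be sketched directly: compare Rodrigues' exact rotation about an Euler pole with its small-angle linearization R ≈ I + [w n]ₓ. The pole, rotation rate, and surface point below are illustrative, not NGS values:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(n, w):
    """Exact rotation by angle w about unit axis n (Rodrigues' formula)."""
    K = skew(n)
    return np.eye(3) + np.sin(w) * K + (1 - np.cos(w)) * (K @ K)

n = np.array([0.2, -0.5, 0.8])
n = n / np.linalg.norm(n)            # Euler-pole unit vector (assumed)
w = np.deg2rad(0.25e-6 * 20)         # ~0.25 microdeg/yr over 20 yr (assumed)

x = 6.371e6 * np.array([0.5, 0.5, np.sqrt(0.5)])  # point on the sphere, m

x_exact = rodrigues(n, w) @ x
x_approx = (np.eye(3) + w * skew(n)) @ x          # small-angle approximation
approx_error = np.linalg.norm(x_exact - x_approx)  # metres
```

At plate-tectonic rotation rates the linearization error is second order in w, many orders of magnitude below the centimetre level, which is why the small-angle form is serviceable over realistic frame lifetimes.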
Motl, Robert W; Fernhall, Bo
2012-03-01
To examine the accuracy of predicting peak oxygen consumption (VO2peak) primarily from peak work rate (WRpeak) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO2peak and WRpeak. There were strong, statistically significant associations between measured and predicted VO2peak in the overall sample (R² = .89, standard error of the estimate = 127.4 mL/min) and in the subsamples with (R² = .89, standard error of the estimate = 131.3 mL/min) and without (R² = .85, standard error of the estimate = 126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO2peak within 10% of its true value in 95 of every 100 subjects with MS. VO2peak can be accurately predicted in persons with RRMS who have minimal disability, as in controls, by using established equations and the WRpeak recorded from a maximal, incremental exercise test on a cycle ergometer. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
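The abstract does not spell out which "established equations" were used; as one hedged illustration, the widely used ACSM leg cycle-ergometry equation predicts absolute VO2 from work rate and body mass:

```python
# ACSM-style leg ergometry (substituted here as an illustration, not
# necessarily the study's equation): VO2 (mL/min) ~ 1.8 * WR(kg.m/min)
# + 7 * body mass (kg), with 1 W ~ 6.12 kg.m/min.

def predict_vo2peak_ml_min(wr_peak_watts, body_mass_kg):
    """Predict absolute VO2peak (mL/min) from peak work rate and body mass."""
    return 1.8 * (wr_peak_watts * 6.12) + 7.0 * body_mass_kg

vo2 = predict_vo2peak_ml_min(wr_peak_watts=150, body_mass_kg=65)
```

For a hypothetical 65 kg participant reaching 150 W, the prediction is about 2.1 L/min; comparing such predictions with measured VO2peak is exactly the regression exercise the study performs.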
An Effectiveness Index and Profile for Instructional Media.
ERIC Educational Resources Information Center
Bond, Jack H.
A scale was developed for judging the relative value of various media in teaching children. Posttest scores were partitioned into several components: error, prior knowledge, guessing, and gain from the learning exercise. By estimating the amounts of prior knowledge, guessing, and error, and then subtracting these from the total score, an index of…
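The score partition reads, in miniature (all numbers hypothetical):

```python
# Index of gain from the learning exercise: subtract estimated prior
# knowledge, guessing, and error from the posttest score.

def effectiveness_index(posttest, prior, guessing, error):
    return posttest - (prior + guessing + error)

gain = effectiveness_index(posttest=38, prior=10, guessing=6, error=2)
```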
Having Fun with Error Analysis
ERIC Educational Resources Information Center
Siegel, Peter
2007-01-01
We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Kan, Jianquan
2018-04-01
In this paper, a new method for identifying genetically modified material from terahertz spectra is proposed, using a support vector machine (SVM) based on affinity propagation clustering. The algorithm uses affinity propagation clustering to analyze and label the unlabeled training samples, and the SVM training data are continuously updated during the iterative process. Because the identification model is established without manually labeling the training samples, the error caused by human-labeled samples is reduced and the identification accuracy of the model is greatly improved.
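A sketch of that pipeline, with synthetic 2-D data standing in for terahertz spectra and scikit-learn's AffinityPropagation and SVC as plausible implementations (the paper's exact iterative update scheme is not reproduced here):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Synthetic unlabeled "spectra": three well-separated groups
X, _ = make_blobs(n_samples=200, centers=3, cluster_std=0.6, random_state=0)

# Affinity propagation assigns pseudo-labels without any manual labeling
ap = AffinityPropagation(random_state=0).fit(X)
pseudo_labels = ap.labels_

# The SVM identification model is trained on the pseudo-labels
svm = SVC(kernel="rbf").fit(X, pseudo_labels)
pred = svm.predict(X)
train_agreement = np.mean(pred == pseudo_labels)
```

In the paper's scheme this clustering/training step is repeated, with the SVM training set refreshed on each iteration; the sketch shows a single pass.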
Numerical study of signal propagation in corrugated coaxial cables
Li, Jichun; Machorro, Eric A.; Shields, Sidney
2017-01-01
Our article focuses on high-fidelity modeling of signal propagation in corrugated coaxial cables. Taking advantage of the axisymmetry, we reduce the 3-D problem to a 2-D problem by solving the time-dependent Maxwell's equations in cylindrical coordinates. We then develop a nodal discontinuous Galerkin method for solving our model equations, and we prove stability and error estimates for the semi-discrete scheme. In presenting our numerical results, we demonstrate that our algorithm not only converges as the theoretical analysis predicts but is also very effective in solving a variety of signal propagation problems in practical corrugated coaxial cables.
Kevin Schaefer; Christopher R. Schwalm; Chris Williams; M. Altaf Arain; Alan Barr; Jing M. Chen; Kenneth J. Davis; Dimitre Dimitrov; Timothy W. Hilton; David Y. Hollinger; Elyn Humphreys; Benjamin Poulter; Brett M. Raczka; Andrew D. Richardson; Alok Sahoo; Peter Thornton; Rodrigo Vargas; Hans Verbeeck; Ryan Anderson; Ian Baker; T. Andrew Black; Paul Bolstad; Jiquan Chen; Peter S. Curtis; Ankur R. Desai; Michael Dietze; Danilo Dragoni; Christopher Gough; Robert F. Grant; Lianhong Gu; Atul Jain; Chris Kucharik; Beverly Law; Shuguang Liu; Erandathie Lokipitiya; Hank A. Margolis; Roser Matamala; J. Harry McCaughey; Russ Monson; J. William Munger; Walter Oechel; Changhui Peng; David T. Price; Dan Ricciuto; William J. Riley; Nigel Roulet; Hanqin Tian; Christina Tonitto; Margaret Torn; Ensheng Weng; Xiaolu Zhou
2012-01-01
Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States...
An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System
NASA Technical Reports Server (NTRS)
Fuhrmann, Henri D.; Stewart, Eric C.
1996-01-01
Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
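The secant iteration at the heart of the wake vortex algorithm can be illustrated in one dimension. The sketch below solves a single point-vortex relation for distance given a measured tangential velocity; the vortex model, strength, and measurement value are hypothetical stand-ins for the paper's four simultaneous flow-field equations:

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Find a root of f(x) = 0 by the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-15:   # flat secant line: stop
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Illustrative 1-D relation: tangential velocity of a point vortex,
# v(r) = Gamma / (2*pi*r). Given a measured v, solve for the distance r.
Gamma = 300.0    # vortex strength, m^2/s (hypothetical)
v_meas = 5.0     # measured tangential velocity, m/s (hypothetical)
f = lambda r: Gamma / (2 * math.pi * r) - v_meas
r_est = secant(f, 1.0, 2.0)
print(r_est)
```

Propagated errors like those tabulated in the study can then be probed by adding random perturbations to `v_meas` and observing the spread of `r_est`.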
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection
NASA Astrophysics Data System (ADS)
Ju, Kuanyu; Xiong, Hongkai
2014-11-01
To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted increasing attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its balance of labor cost and 3D effect. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once the key-frame depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. Experimental results show that the proposed scheme outperforms an existing 2D-to-3D scheme with a fixed key-frame interval.
NASA Astrophysics Data System (ADS)
GonzáLez, Pablo J.; FernáNdez, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR remains atmospheric propagation error, which is why multitemporal interferometric techniques based on series of interferograms have been developed so successfully. However, none of the standard multitemporal interferometric techniques, namely PS and SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of deformation time series. We implement it for the SB multitemporal interferometric technique (a technique favorable for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in recent decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
40 CFR 403.16 - Upset provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operational error, improperly designed treatment facilities, inadequate treatment facilities, lack of... usual exercise of prosecutorial discretion, Agency enforcement personnel should review any claims that...
NASA Technical Reports Server (NTRS)
Smith, G. A.
1975-01-01
The attitude of a spacecraft is determined by specifying independent parameters which relate the spacecraft axes to an inertial coordinate system. Sensors which measure angles between spin axis and other vectors directed to objects or fields external to the spacecraft are discussed. For the spin-stabilized spacecraft considered, the spin axis is constant over at least an orbit, but separate solutions based on sensor angle measurements are different due to propagation of errors. Sensor-angle solution methods are described which minimize the propagated errors by making use of least squares techniques over many sensor angle measurements and by solving explicitly (in closed form) for the spin axis coordinates. These methods are compared with star observation solutions to determine if satisfactory accuracy is obtained by each method.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m/s for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
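The per-grid-point Monte Carlo procedure can be sketched as follows. The PET model here is a hypothetical stand-in (not the authors' formulation), the correlations among input errors are omitted for brevity, and the kriging standard deviations are the spring-season averages quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a PET model at a single grid cell:
def pet_model(temp_c, rel_hum_pct, wind_ms):
    return np.maximum(0.0, 0.3 * temp_c * (1 - rel_hum_pct / 100.0)
                      * (1 + 0.2 * wind_ms))

# Kriged (interpolated) input values (hypothetical) with the kriging
# standard deviations quoted for the spring-season example.
temp, sd_temp = 15.0, 2.6    # deg C
rh, sd_rh = 60.0, 8.7        # percent relative humidity
wind, sd_wind = 3.0, 0.38    # m/s

n = 100  # Monte Carlo draws per grid point, as in the study
samples = pet_model(rng.normal(temp, sd_temp, n),
                    rng.normal(rh, sd_rh, n),
                    rng.normal(wind, sd_wind, n))
cv = samples.std() / samples.mean()
print(f"PET coefficient of variation from interpolation error: {cv:.1%}")
```

Repeating this at every grid cell, with correlated error terms, yields the maps of PET means and CVs described above.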
Analysis of the Effect of UT1-UTC on High Precision Orbit Propagation
NASA Astrophysics Data System (ADS)
Shin, Dongseok; Kwak, Sunghee; Kim, Tag-Gon
1999-12-01
As the spatial resolution of remote sensing satellites becomes higher, very accurate determination of the position of a LEO (Low Earth Orbit) satellite is in greater demand than ever. Non-symmetric Earth gravity is the major perturbation force on LEO satellites. Since orbit propagation is performed in the celestial frame while Earth gravity is defined in the terrestrial frame, the satellite coordinates must be converted accurately from one frame to the other. Unless this conversion is performed accurately, the orbit propagation calculates an incorrect Earth gravitational force at a given time instant and hence introduces errors into the orbit prediction. The coordinate conversion between the two frames involves precession, nutation, Earth rotation, and polar motion. Among these factors, the unpredictability and uncertainty of Earth rotation, expressed as UT1-UTC, is the largest error source. In this paper, the effect of UT1-UTC on the accuracy of LEO propagation is introduced, tested, and analyzed. Considering the maximum unpredictability of UT1-UTC, 0.9 seconds, the meaningful order of the non-spherical Earth harmonic functions is derived.
2013-01-01
Background Traditional Chinese eye exercises of acupoints involve acupoint self-massage. These have been advocated as a compulsory measure to reduce ocular fatigue, as well as to retard the development of myopia, among Chinese school children. This study evaluated the impact of these eye exercises among Chinese urban children. Methods 409 children (195 males, 47.7%), aged 11.1 ± 3.2 (range 6–17) years, from the Beijing Myopia Progression Study (BMPS) were recruited. All had completed the eye exercise questionnaire, the convergence insufficiency symptom survey (CISS), and a cycloplegic autorefraction. Among these, 395 (96.6%) performed the eye exercises of acupoints. Multiple logistic regressions for myopia and multiple linear regressions for the CISS score (after adjusting for age, gender, average parental refractive error, and time spent doing near work and outdoor activity) for the different items of the eye exercises questionnaire were performed. Results Only the univariate odds ratio (95% confidence interval) for “seriousness of attitude” towards performing the eye exercises of acupoints (0.51, 0.33-0.78) showed a protective effect towards myopia. However, none of the odds ratios were significant after adjusting for the confounding factors. The univariate and multiple β coefficients for the CISS score were -2.47 (p = 0.002) and -1.65 (p = 0.039), -3.57 (p = 0.002) and -2.35 (p = 0.042), and -2.40 (p = 0.003) and -2.29 (p = 0.004), for attitude, speed of exercise, and acquaintance with acupoints, respectively, which were all significant. Conclusions The traditional Chinese eye exercises of acupoints appeared to have a modest effect on relieving near vision symptoms among Chinese urban children aged 6 to 17 years. However, no remarkable effect on reducing myopia was observed. PMID:24195652
A Software Package for Neural Network Applications Development
NASA Technical Reports Server (NTRS)
Baran, Robert H.
1993-01-01
Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, builds a TSR (terminate-and-stay-resident) routine through which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
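A minimal modern sketch of three-layer backprop on a toy training set is shown below. It uses sigmoid hidden units rather than the binary threshold units of Original Backprop, so it illustrates the idea of back-propagating error corrections, not the package itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR training set: four input vectors paired with one classification bit each.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Three-layer, feed-forward network: 2 inputs -> 4 hidden -> 1 output.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def mse():
    return float(((sig(sig(X @ W1 + b1) @ W2 + b2) - T) ** 2).mean())

mse_before = mse()
for _ in range(5000):
    H = sig(X @ W1 + b1)              # forward pass, hidden layer
    Y = sig(H @ W2 + b2)              # forward pass, output layer
    dY = (Y - T) * Y * (1 - Y)        # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)    # error back-propagated to hidden layer
    W2 -= H.T @ dY; b2 -= dY.sum(0)   # incremental weight corrections
    W1 -= X.T @ dH; b1 -= dH.sum(0)

print(mse_before, mse())
```

The mapping is nonlinearly separable, which is exactly why the hidden layer (and the back-propagated error term `dH`) is needed.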
Gleadhill, Sam; Lee, James Bruce; James, Daniel
2016-05-03
This research presented and validated a method of assessing postural changes during resistance exercise using inertial sensors. A simple lifting task was broken down into a series of well-defined tasks, which could be examined and measured in a controlled environment. The purpose of this research was to determine whether timing measures obtained from inertial sensor accelerometer outputs are able to provide accurate, quantifiable information on resistance exercise movement patterns. The aim was to complete a timing-measure validation of inertial sensor outputs. Eleven participants completed five repetitions of 15 different deadlift variations. Participants were monitored with inertial sensors and an infrared three-dimensional motion capture system. Validation was undertaken using a Hopkins typical error of the estimate, with a Pearson's correlation and a Bland-Altman limits of agreement analysis. Statistical validation measured the timing agreement during deadlifts between inertial sensor outputs and the motion capture system. Timing validation results demonstrated a Pearson's correlation of 0.9997, with trivial standardised error (0.026) and standardised bias (0.002). Inertial sensors can now be used in practical settings with as much confidence as motion capture systems for accelerometer timing measurements of resistance exercise. This research provides foundations for inertial sensors to be applied for qualitative activity recognition of resistance exercise and safe lifting practices. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species and then applying sensitivity analysis to remove further species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce well the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions.
The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
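The error-propagation step of DRGEP can be sketched as a max-product graph search: the overall interaction coefficient of a species with respect to a target is the maximum over paths of the product of direct interaction coefficients along the path. The graph and coefficient values below are hypothetical, not taken from any real mechanism:

```python
import heapq

# Toy direct-interaction coefficients r[a][b] in [0, 1] (hypothetical).
r = {
    "fuel": {"A": 0.9, "B": 0.2},
    "A":    {"C": 0.8},
    "B":    {"C": 0.5},
    "C":    {},
}

def drgep_coefficients(r, target):
    """Overall interaction coefficients via a best-first max-product search
    (equivalently, Dijkstra on -log of the coefficients)."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg, node = heapq.heappop(heap)
        if -neg < R.get(node, 0.0):
            continue  # stale heap entry
        for nbr, coeff in r.get(node, {}).items():
            cand = -neg * coeff
            if cand > R.get(nbr, 0.0):
                R[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return R

R = drgep_coefficients(r, "fuel")
print(R)
```

Species whose overall coefficient falls below a chosen error threshold become candidates for removal; in DRGEPSA, the borderline cases are then screened by sensitivity analysis.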
NASA Astrophysics Data System (ADS)
Karami, Behrouz; Janghorban, Maziar; Li, Li
2018-03-01
We found a proofing error in the affiliation of the first and second authors of our article [1]. The correct affiliation should be "Department of Mechanical Engineering, Marvdasht Branch, Islamic Azad University, Marvdasht, Iran".
The propagation of wind errors through ocean wave hindcasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holthuijsen, L.H.; Booij, N.; Bertotti, L.
1996-08-01
To estimate uncertainties in wave forecasts and hindcasts, computations have been carried out for a location in the Mediterranean Sea using three different analyses of one historic wind field. These computations involve a systematic sensitivity analysis and estimated wind field errors. This technique enables a wave modeler to estimate such uncertainties in other forecasts and hindcasts when only one wind analysis is available.
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy, so sophisticated error modelling and well-implemented integration algorithms are key to providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters assume that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption holds for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is therefore to use particle filtering (PF), a sophisticated option for integrating measurements arising from pedestrian motion with non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and from vision-based heading and translation measurements, so that the correct error probability density functions (pdfs) are included in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed from the specific models derived.
The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement on the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918
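The core mechanism, reweighting particles with a non-Gaussian measurement pdf and resampling when the effective sample size collapses, can be sketched in one dimension. The motion model, noise scales, and Cauchy error model below are illustrative assumptions, not the paper's fitted pdfs:

```python
import numpy as np

rng = np.random.default_rng(7)

n_particles, n_steps = 1000, 50
true_x, step = 0.0, 0.8                    # pedestrian advances 0.8 m per step

particles = rng.normal(0.0, 0.5, n_particles)
weights = np.ones(n_particles) / n_particles

def measurement_pdf(residual, scale=0.7):
    # Heavy-tailed (Cauchy) error model instead of a Gaussian assumption.
    return 1.0 / (np.pi * scale * (1.0 + (residual / scale) ** 2))

errors = []
for _ in range(n_steps):
    true_x += step
    z = true_x + rng.standard_cauchy() * 0.3   # noisy position-like measurement
    # Propagate particles with a noisy motion model, then reweight.
    particles += step + rng.normal(0.0, 0.2, n_particles)
    weights *= measurement_pdf(z - particles)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    errors.append(abs((weights * particles).sum() - true_x))

print(f"mean absolute position error: {np.mean(errors):.2f} m")
```

The heavy-tailed likelihood keeps occasional outlier measurements from derailing the estimate, which is the point of modelling the true error pdfs rather than assuming Gaussian noise.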
Estimation of the center of rotation using wearable magneto-inertial sensors.
Crabolu, M; Pani, D; Raffo, L; Cereatti, A
2016-12-08
Determining the center of rotation (CoR) of joints is fundamental to the field of human movement analysis. The CoR can be determined using a magneto-inertial measurement unit (MIMU) with a functional approach requiring a calibration exercise. We systematically investigated the influence of experimental conditions that can affect the precision and accuracy of CoR estimation: (a) angular joint velocity, (b) distance between the MIMU and the CoR, (c) type of joint motion implemented, (d) amplitude of the angular range of motion, (e) model of the MIMU used for data recording, (f) amplitude of additive noise on the inertial signals, and (g) amplitude of the errors in the MIMU orientation. The evaluation process was articulated at three levels: experiments using a mechanical device, mathematical simulation, and an analytical propagation model of the noise. The results reveal that joint angular velocity significantly impacted CoR identification; hence, slow joint movements should be avoided. An accurate estimation of the MIMU orientation is also fundamental for accurately subtracting the contribution of gravity to obtain the coordinate acceleration. The unit should preferably be attached close to the CoR, but neither the type nor the range of motion appears to be critical. When the proposed methodology is correctly implemented, the error in the CoR estimates is expected to be < 3 mm (best estimates = 2 ± 0.5 mm). The findings of the present study foster the need to further investigate this methodology for application in human subjects. Copyright © 2016 Elsevier Ltd. All rights reserved.
Conditions for the optical wireless links bit error ratio determination
NASA Astrophysics Data System (ADS)
Kvíčala, Radek
2017-11-01
To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be characterized by the bit error ratio (BER) of the optical beam, i.e., the fraction of transmitted bits that are received in error. In practice, BER measurement runs into the problem of determining the integration time (measuring time). A bit error ratio tester (BERT) has been developed for measuring and recording BER on OWL. The accessible literature mentions a 1-second integration time for 64 kbps radio links; however, this integration time cannot be used here owing to the singular nature of coherent beam propagation.
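A common rule of thumb (not from this paper) for sizing the integration time is to observe enough bits to expect on the order of 10-100 errors at the target BER; a sketch:

```python
def measuring_time_s(target_ber, bit_rate_bps, errors_needed=100):
    """Seconds of observation needed to expect `errors_needed` bit errors
    at the given target BER and line rate."""
    bits_needed = errors_needed / target_ber
    return bits_needed / bit_rate_bps

# A 64 kbps link verified to BER 1e-6 needs about 26 minutes of integration:
t = measuring_time_s(1e-6, 64_000)
print(t)
```

This shows why a fixed 1-second integration time is only meaningful for a particular combination of bit rate and BER, and why atmospheric optical links, with bursty fade-driven errors, need the integration time chosen with care.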
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
Error analysis in inverse scatterometry. I. Modeling.
Al-Assaad, Rayan M; Byrne, Dale M
2007-02-01
Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.
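For an iterative linearized retrieval, random measurement errors propagate to the parameter estimates through the Jacobian of the forward model. Below is a generic first-order sketch with a hypothetical Jacobian (not the rigorous scatterometry diffraction model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Jacobian of 20 reflectance measurements with respect to 2
# profile parameters (the real Jacobian comes from the diffraction model).
J = rng.normal(size=(20, 2))
sigma_meas = 0.01                      # 1-sigma random measurement error
Sigma = sigma_meas ** 2 * np.eye(20)   # measurement covariance (uncorrelated)

# First-order propagation through the linearized least-squares estimator:
# cov(p) = (J^T J)^-1 J^T Sigma J (J^T J)^-1
JtJ_inv = np.linalg.inv(J.T @ J)
cov_p = JtJ_inv @ J.T @ Sigma @ J @ JtJ_inv
print(np.sqrt(np.diag(cov_p)))         # 1-sigma parameter uncertainties
```

A deterministic offset error, by contrast, does not average out this way; it biases the estimate directly, which is why the paper models and adjusts for offsets separately.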
Feasibility of a Caregiver Assisted Exercise Program for Preterm Infants
Gravem, Dana; Lakes, Kimberley; Rich, Julia; Hayes, Gillian; Cooper, Dan; Olshansky, Ellen
2013-01-01
Purpose Mounting evidence shows that low birth weight and prematurity are related to serious health problems in adulthood, including increased body fat, decreased fitness, poor bone mineralization, pulmonary problems, and cardiovascular disease. There is data to suggest that increasing physical activity in preterm infants will have effects on short term muscle mass and fat mass, but we also hypothesized that increasing physical activity early in life can lead to improved health outcomes in adulthood. Because few studies have addressed the augmentation of physical activity in premature babies, the objective of this study was to evaluate the feasibility of whether caregivers (mostly mothers) can learn from nurses and other health care providers to implement a program of assisted infant exercise following discharge. Study Design and Methods Ten caregivers of preterm infants were taught by nurses, along with occupational therapists and other health care providers, to perform assisted infant exercise and instructed to conduct the exercises daily for approximately three weeks. The researchers made home visits and conducted qualitative interviews to understand the caregivers’ (mostly mothers’) experiences with this exercise protocol. Quantitative data included a caregiver’s daily log of the exercises completed to measure adherence as well as videotaped caregiver sessions, which were used to record errors as a measure of proficiency in the exercise technique. Results On average, the caregivers completed a daily log on 92% of the days enrolled in the study and reported performing the exercises on 93% of the days recorded. Caregivers made an average of 1.8 errors on two tests (with a maximum of 23 or 35 items on each, respectively) when demonstrating proficiency in the exercise technique. All caregivers described the exercises as beneficial for their infants, and many reported that these interventions fostered increased bonding with their babies. 
Nearly all reported feeling “scared” of hurting their babies during the first few days of home exercise, but stated that fears were alleviated by practice in the home and further teaching and learning. Clinical Implications Caregivers were willing and able to do the exercises correctly, and they expressed a belief that the intervention had positive effects on their babies and on caregiver-infant interactions. These findings have important implications for nursing practice because nurses are in key positions to teach and encourage caregivers to practice these exercises with their newborn babies. PMID:23618941
Obesity, Exercise and Orthopedic Disease.
Frye, Christopher W; Shmalberg, Justin W; Wakshlag, Joseph J
2016-09-01
Osteoarthritis is common among aging canine and feline patients. The incidence and severity of clinical lameness are closely correlated to body condition in overweight and obese patients. Excessive adiposity may result in incongruous and excessive mechanical loading that worsens clinical signs in affected patients. Data suggest a potential link between adipokines, obesity-related inflammation, and a worsening of the underlying pathology. Similarly, abnormal physical stress and generalized systemic inflammation propagated by obesity contribute to neurologic signs associated with intervertebral disc disease. Weight loss and exercise are critical to ameliorating the pain and impaired mobility of affected animals. Copyright © 2016 Elsevier Inc. All rights reserved.
The Trojan Lifetime Champions Health Survey: Development, Validity, and Reliability
Sorenson, Shawn C.; Romano, Russell; Scholefield, Robin M.; Schroeder, E. Todd; Azen, Stanley P.; Salem, George J.
2015-01-01
Context Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. Objective To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Design Descriptive laboratory study. Setting A large National Collegiate Athletic Association Division I university. Patients or Other Participants A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Intervention(s) Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Main Outcome Measure(s) Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Results Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. 
We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. Conclusions These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations. PMID:25611315
NASA Astrophysics Data System (ADS)
Parenti, Ronald R.; Michael, Steven; Roth, Jeffrey M.; Yarnall, Timothy M.
2010-08-01
Over a two-year period beginning in early 2008, MIT Lincoln Laboratory conducted two free-space optical communication experiments designed to test the ability of spatial beam diversity, symbol encoding, and interleaving to reduce the effects of turbulence-induced scintillation. The first of these exercises demonstrated a 2.7 Gb/s link over a ground-level 5.4 km horizontal path. Signal detection was accomplished through the use of four spatially-separated 12 mm apertures that coupled the received light into pre-amplified single-mode fiber detectors. Similar equipment was used in a second experiment performed in the fall of 2009, which demonstrated an error-free air-to-ground link at propagation ranges up to 60 km. In both of these tests power levels at all fiber outputs were sampled at 1 msec intervals, which enabled a high-rate characterization of the received signal fluctuations. The database developed from these experiments encompasses a wide range of propagation geometries and turbulence conditions. This information has subsequently been analyzed in an attempt to correlate estimates of the turbulence profile with measurements of the scintillation index, characteristic fading time constant, scintillation patch size, and the shape parameters of the statistical distributions of the received signals. Significant findings include observations of rapid changes in the scintillation index driven by solar flux variations, consistent similarities in the values of the alpha and beta shape parameters of the gamma-gamma distribution function, and strong evidence of channel reciprocity. This work was sponsored by the Department of Defense, RRCO DDR&E, under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the authors and are not necessarily endorsed by the United States Government.
Effect of hand-arm exercise on venous blood constituents during leg exercise
NASA Technical Reports Server (NTRS)
Wong, N.; Silver, J. E.; Greenawalt, S.; Kravik, S. E.; Geelen, G.
1985-01-01
Contributions by ancillary hand and arm actions to the changes in blood constituents effected by leg exercise on a cycle ergometer were assessed. Static or dynamic hand-arm exercises were added to the leg exercise (50 percent VO2 peak)-only control regimens for the subjects (19-27 yr old men) in the two experimental groups. Antecubital venous blood was analyzed at times 0, 15, and 30 min (T0, T15, and T30) for serum Na(+), K(+), osmolality, albumin, total Ca(2+), and glucose; blood hemoglobin, hematocrit, and lactic acid; and change in plasma volume. Only glucose and lactate values were affected by additional arm exercise. Glucose decreased 4 percent at T15 and T30 after static exercise, and by 2 percent at T15 (with no change at T30) after dynamic arm exercise. Conversely, lactic acid increased by 20 percent at T30 after static exercise, and by 14 percent at T15 and 6 percent at T30 after dynamic arm exercise. It is concluded that additional arm movements, usually performed when gripping the handle-bar on the cycle ergometer, could introduce significant errors in measured venous concentrations of glucose and lactate in the leg-exercised subjects.
A Hands-On Exercise Improves Understanding of the Standard Error of the Mean
ERIC Educational Resources Information Center
Ryan, Robert S.
2006-01-01
One of the most difficult concepts for statistics students is the standard error of the mean. To improve understanding of this concept, 1 group of students used a hands-on procedure to sample from small populations representing either a true or false null hypothesis. The distribution of 120 sample means (n = 3) from each population had standard…
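The hands-on sampling procedure above lends itself to a numerical sketch; the population values, sample size (n = 3), number of samples, and seed below are illustrative assumptions, not the article's classroom materials:

```python
import random
import statistics

def sample_means(population, n, num_samples, seed=0):
    """Draw num_samples samples of size n (with replacement) and return their means."""
    rng = random.Random(seed)
    return [statistics.mean(rng.choices(population, k=n))
            for _ in range(num_samples)]

population = [2, 4, 6, 8, 10, 12]      # small illustrative population
means = sample_means(population, n=3, num_samples=120)

# The spread of the 120 sample means approximates the standard error of the mean.
observed_se = statistics.stdev(means)
predicted_se = statistics.pstdev(population) / (3 ** 0.5)
```

With enough samples the observed spread converges on sigma/sqrt(n), which is the point the classroom exercise makes concrete.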
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiangyu; Shi, Xianbo; Wang, Yong
The mutual optical intensity (MOI) model is extended to include the propagation of partially coherent radiation through non-ideal mirrors. The propagation of the MOI from the incident to the exit plane of the mirror is realised by local ray tracing. The effects of figure errors can be expressed as phase shifts obtained by either the phase projection approach or the direct path length method. Using the MOI model, the effects of figure errors are studied for diffraction-limited cases using elliptical cylinder mirrors. Figure errors with low spatial frequencies can vary the intensity distribution, redistribute the local coherence function and distort the wavefront, but have no effect on the global degree of coherence. The MOI model is benchmarked against HYBRID and the multi-electron Synchrotron Radiation Workshop (SRW) code. The results show that the MOI model gives accurate results under different coherence conditions of the beam. Other than intensity profiles, the MOI model can also provide the wavefront and the local coherence function at any location along the beamline. The capability of tuning the trade-off between accuracy and efficiency makes the MOI model an ideal tool for beamline design and optimization.
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for the marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs. PMID:27999351
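The 1:1 propagation of low-frequency DOV components into latitude and longitude error admits a back-of-envelope sketch; the mean Earth radius and the 10-arcsecond example below are assumptions for illustration, not values from the paper:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius (assumed value)

def dov_position_error_m(dov_arcsec):
    """Steady-state horizontal position error from a constant deflection of
    the vertical, using the 1:1 angle-to-angle propagation described above."""
    dov_rad = math.radians(dov_arcsec / 3600.0)
    return EARTH_RADIUS_M * dov_rad

err = dov_position_error_m(10.0)  # ~309 m for a 10-arcsecond deflection
```

Even a few arcseconds of uncompensated DOV therefore translate into position errors of hundreds of metres, which is why a high-resolution model such as EGM2008 pays off in long-term navigation.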
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. 
Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase-locked loops (PLLs), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in the presence of small phase errors. Such an approximation represents a reasonable assumption in most propagation channels. However, in the presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in the presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations, although cycle slips only occurred during signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in the presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
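The linear approximation referred to in this abstract is the small-angle model of the PLL's sinusoidal phase-detector characteristic; a minimal numerical illustration (the 5-degree and 60-degree phase errors are chosen arbitrarily, not taken from the study):

```python
import math

def linearisation_error(phase_error_rad):
    """Gap between the sinusoidal detector characteristic sin(x) and the
    linear model x that a standard PLL design assumes."""
    return abs(phase_error_rad - math.sin(phase_error_rad))

small = linearisation_error(math.radians(5))   # deep in the linear regime
large = linearisation_error(math.radians(60))  # scintillation-sized phase error
```

The gap is negligible for small phase errors but grows rapidly with amplitude, which is why deep scintillation fades push the loop into the non-linear regime the abstract describes.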
Are Nomothetic or Ideographic Approaches Superior in Predicting Daily Exercise Behaviors?
Cheung, Ying Kuen; Hsueh, Pei-Yun Sabrina; Qian, Min; Yoon, Sunmoo; Meli, Laura; Diaz, Keith M; Schwartz, Joseph E; Kronish, Ian M; Davidson, Karina W
2017-01-01
The understanding of how stress influences health behavior can provide insights into developing healthy lifestyle interventions. This understanding is traditionally attained through observational studies that examine associations at a population level. This nomothetic approach, however, is fundamentally limited by the fact that the environment-person milieu that constitutes stress exposure and experience can vary substantially between individuals, and the modifiable elements of these exposures and experiences are individual-specific. With recent advances in smartphone and sensing technologies, it is now possible to conduct idiographic assessment in users' own environment, leveraging the full-range observations of actions and experiences that result in differential response to naturally occurring events. The aim of this paper is to explore the hypothesis that an idiographic N-of-1 model can better capture an individual's stress-behavior pathway (or the lack thereof) and provide useful person-specific predictors of exercise behavior. This paper used the data collected in an observational study in 79 participants who were followed for up to a 1-year period, wherein their physical activity was continuously and objectively monitored by actigraphy and their stress experience was recorded via ecological momentary assessment on a mobile app. In addition, our analyses considered exogenous and environmental variables retrieved from public archives such as day of the week, daylight time, temperature and precipitation. Leveraging the multiple data sources, we developed prediction algorithms for exercise behavior using random forest and classification tree techniques in both a nomothetic approach and an N-of-1 approach. The two approaches were compared based on classification errors in predicting personalized exercise behavior.
Eight factors were selected by random forest for the nomothetic decision model, which was used to predict whether a participant would exercise on a particular day. The predictors included previous exercise behavior, emotional factors (e.g., midday stress), external factors such as weather (e.g., temperature), and self-determination factors (e.g., expectation of exercise). The nomothetic model yielded an average classification error of 36%. The idiographic N-of-1 models used on average about two predictors for each individual, and had an average classification error of 25%, which represented an improvement of 11 percentage points. Compared to the traditional, one-size-fits-all nomothetic model that generalizes population evidence to individuals, the proposed N-of-1 model can better capture individual differences in stress-behavior pathways. In this paper, we demonstrate that it is feasible to perform personalized exercise behavior prediction, made possible mainly by mobile health technology and machine learning analytics. Schattauer GmbH.
Atmospheric microwave refractivity and refraction
NASA Technical Reports Server (NTRS)
Yu, E.; Hodge, D. B.
1980-01-01
The atmospheric refractivity can be expressed as a function of temperature, pressure, water vapor content, and operating frequency. Based on twenty-year meteorological data, statistics of the atmospheric refractivity were obtained. These statistics were used to estimate the variation of dispersion, attenuation, and refraction effects on microwave and millimeter wave signals propagating along atmospheric paths. Bending angle, elevation angle error, and range error were also developed for an exponentially tapered, spherical atmosphere.
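The abstract does not give its refractivity expression; a common choice with the same dependencies is the Smith-Weintraub form with its usual constants, assumed here as a sketch:

```python
def refractivity(pressure_hpa, temp_k, vapor_pressure_hpa):
    """Radio refractivity N = (n - 1) * 1e6 in N-units, Smith-Weintraub form:
    a dry term proportional to P/T plus a wet term proportional to e/T^2."""
    dry = 77.6 * pressure_hpa / temp_k
    wet = 3.73e5 * vapor_pressure_hpa / temp_k ** 2
    return dry + wet

# Typical surface conditions: 1013 hPa, 290 K, 10 hPa water-vapour pressure.
n_surface = refractivity(1013.0, 290.0, 10.0)  # ~315 N-units
```

The strong e/T^2 wet term is what makes water vapor content dominate the refractivity statistics that drive the bending-angle and range-error estimates mentioned above.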
GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.
2007-04-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.
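For an isotopic ratio r = a/b with uncorrelated inputs, the GUM first-order propagation reduces to adding relative uncertainties in quadrature; a minimal sketch with hypothetical count values (not taken from the report):

```python
import math

def ratio_with_uncertainty(a, sigma_a, b, sigma_b):
    """First-order (GUM-style) standard error of the ratio r = a / b,
    assuming uncorrelated input uncertainties."""
    r = a / b
    sigma_r = abs(r) * math.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)
    return r, sigma_r

# Hypothetical isotope signals: 5000 +/- 70 counts over 20000 +/- 140 counts.
r, sigma_r = ratio_with_uncertainty(5000.0, 70.0, 20000.0, 140.0)
```

A confidence bound then follows from r plus or minus a coverage factor times sigma_r under the usual normality assumption.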
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
Alastruey, Jordi; Hunt, Anthony A E; Weinberg, Peter D
2014-01-01
We present a novel analysis of arterial pulse wave propagation that combines traditional wave intensity analysis with identification of Windkessel pressures to account for the effect on the pressure waveform of peripheral wave reflections. Using haemodynamic data measured in vivo in the rabbit or generated numerically in models of human compliant vessels, we show that traditional wave intensity analysis identifies the timing, direction and magnitude of the predominant waves that shape aortic pressure and flow waveforms in systole, but fails to identify the effect of peripheral reflections. These reflections persist for several cardiac cycles and make up most of the pressure waveform, especially in diastole and early systole. Ignoring peripheral reflections leads to an erroneous indication of a reflection-free period in early systole and additional error in the estimates of (i) pulse wave velocity at the ascending aorta given by the PU–loop method (9.5% error) and (ii) transit time to a dominant reflection site calculated from the wave intensity profile (27% error). These errors decreased to 1.3% and 10%, respectively, when accounting for peripheral reflections. Using our new analysis, we investigate the effect of vessel compliance and peripheral resistance on wave intensity, peripheral reflections and reflections originating in previous cardiac cycles. PMID:24132888
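The PU-loop estimate discussed above rests on the water-hammer relation dP = rho * c * dU, valid only in the reflection-free part of early systole; a sketch with illustrative values (the blood density and the dP, dU increments are assumptions, not the paper's data):

```python
def pu_loop_wave_speed(dP_pa, dU_m_s, rho=1060.0):
    """Pulse wave velocity from the early-systolic PU-loop slope via the
    water-hammer relation dP = rho * c * dU (reflection-free assumption)."""
    return dP_pa / (rho * dU_m_s)

# Illustrative early-systolic increments: 2 kPa pressure rise, 0.4 m/s velocity rise.
c = pu_loop_wave_speed(dP_pa=2000.0, dU_m_s=0.4)  # ~4.7 m/s
```

Persistent peripheral reflections violate the reflection-free assumption, which is exactly the error source the analysis above quantifies for the PU-loop method.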
Architectural elements of hybrid navigation systems for future space transportation
NASA Astrophysics Data System (ADS)
Trigo, Guilherme F.; Theil, Stephan
2018-06-01
The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows online calibration of the inertial sensors, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, and the handling of sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial-outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2012-01-01
The high-resolution wave-propagation method for computing the nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as the linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated and, 2) the inclusion of source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Shi, Xianbo; Reininger, Ruben; Sanchez del Rio, Manuel; ...
2014-05-15
A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The 'Hybrid Method' computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results, pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization.
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A recognition system for the nominal value of banknotes has been developed using an Artificial Neural Network (ANN). ANN training with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the number of iterations, weights and samples is large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error E is a parabolic function of each weight adjustment, so each update drives the error gradient (E') toward zero. In our system, we use 5 nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal was scanned and digitally processed. There are 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the Back Propagation method required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
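The parabolic assumption behind Quickprop yields the update Δw(t) = Δw(t-1) · E'(t) / (E'(t-1) − E'(t)); a scalar sketch of one such step (the fallback learning rate and growth cap are conventional Quickprop choices, not values from the paper):

```python
def quickprop_step(w, grad, prev_grad, prev_step, max_growth=1.75, lr=0.1):
    """One Quickprop update for a single weight: fit a parabola through the
    current and previous error gradients and jump toward its minimum."""
    denom = prev_grad - grad
    if denom == 0.0 or prev_step == 0.0:
        step = -lr * grad                      # fall back to gradient descent
    else:
        step = prev_step * grad / denom
        if abs(step) > max_growth * abs(prev_step):
            # Cap the jump so the parabolic extrapolation cannot blow up.
            step = max_growth * abs(prev_step) * (1.0 if step > 0 else -1.0)
    return w + step, step

# On a quadratic error surface E = (w - 3)^2 / 2, one Quickprop step with
# valid history (previous point w = 1, current point w = 2) lands exactly
# on the minimum at w = 3, which is why it can converge so much faster
# than plain Back Propagation on near-quadratic error surfaces.
w_new, _ = quickprop_step(w=2.0, grad=-1.0, prev_grad=-2.0, prev_step=1.0)
```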
Representation of layer-counted proxy records as probability densities on error-free time axes
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, such as ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. We show that the further one goes into the past, the more the layer-counting errors accumulate, leading to growing uncertainties in the resulting sequence of probability densities over the proxy values. For the older parts of the record, these uncertainties increasingly compromise a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing, and in particular aligning, specific events among different layer-counted proxy records.
On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the relative changes, i.e. in the increments of the proxy values. In such cases, only the relative (non-cumulative) counting errors matter. For the example of the NGRIP records, we show that a precise estimation of these relative changes is in fact possible. References: [1] Goswami et al., Nonlin. Processes Geophys. (2014) [2] Svensson et al., Clim. Past (2008)
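The contrast drawn here between cumulative and relative counting errors can be illustrated with a small Monte Carlo sketch. The independent-Gaussian per-layer error model and its magnitude are assumptions for illustration, not the NGRIP error model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_layers = 200, 10_000
sigma = 0.05                      # assumed per-layer counting error (years, std)

# an independent counting error for every layer, in every simulated chronology
errors = rng.normal(0.0, sigma, size=(n_runs, n_layers))
age_error = errors.cumsum(axis=1)            # absolute age error at each depth

# absolute dating uncertainty accumulates like a random walk (~ sigma*sqrt(n)),
# while the error on increments between adjacent layers stays ~ sigma
std_at_end = age_error[:, -1].std()                   # grows with depth
increment_std = np.diff(age_error, axis=1).std()      # stays near sigma
```

The random-walk growth of the absolute age error is why alignment of distant events across records is delicate, while the increment error stays bounded, which is why relative changes remain well estimated.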
NASA Astrophysics Data System (ADS)
Sinha, T.; Arumugam, S.
2012-12-01
Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized in updating water management plans and in optimizing hydroelectric power generation. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow-dominated basins, where initial hydrologic conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scales (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to drive land surface models, which typically run at finer spatial and temporal scales. Consequently, errors from multiple sources are introduced at various stages in predicting seasonal streamflow. In this study, we therefore address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to their various sources - (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? 2) How do monthly streamflow forecast errors propagate with lead time over various seasons? The Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL, in the southeastern US and run with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes, under IHCs updated prior to the forecasting period, to estimate relative mean square errors due to: (a) temporal disaggregation, (b) spatial downscaling, (c) reverse Ensemble Streamflow Prediction (imprecise IHCs), (d) ESP (no forecasts), and (e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed as a function of lead time over different seasons.
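The attribution step can be sketched as a relative mean square error of each forecasting scheme against the reference run. The monthly flow values below are hypothetical, and normalizing by the reference variance is one common choice, not necessarily the authors'.

```python
import numpy as np

def relative_mse(forecast, reference):
    """MSE of a scheme, normalised by the variance of the reference flow
    so that schemes can be compared across seasons."""
    forecast, reference = np.asarray(forecast), np.asarray(reference)
    return ((forecast - reference) ** 2).mean() / reference.var()

# hypothetical monthly flows (m^3/s): reference simulation and two schemes
reference = np.array([120.0, 95.0, 80.0, 150.0, 210.0, 170.0])
esp       = np.array([110.0, 100.0, 90.0, 140.0, 190.0, 180.0])  # no forecast information
echam     = np.array([118.0, 97.0, 83.0, 147.0, 205.0, 172.0])   # with precipitation forecasts

rmse_esp = relative_mse(esp, reference)      # larger relative error
rmse_echam = relative_mse(echam, reference)  # smaller relative error
```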
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with a performance/cost trade-off. The aim of this study was to compare the performance of mono- and bi-planar setups against a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that the accuracy of mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty, which propagates to relative kinematics differently depending on the setup. To take full advantage of it, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints relative to mono-planar analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
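Sub-cell emission sampling of the kind source tilting requires can be sketched with an inverse-CDF draw from a linear profile. The parametrization p(x) = 1 + m*(x - 0.5) on a unit cell is an assumption for illustration, not the paper's tilt scheme.

```python
import numpy as np

def sample_tilted(u, m):
    """Inverse-CDF sample of x in [0, 1] from the linear emission
    profile p(x) = 1 + m*(x - 0.5), with |m| <= 2 for positivity.
    Solves F(x) = u where F(x) = x + (m/2)*x*(x - 1)."""
    if abs(m) < 1e-12:
        return u                                   # flat profile
    a = 1.0 - 0.5 * m
    return (-a + np.sqrt(a * a + 2.0 * m * u)) / m

rng = np.random.default_rng(1)
u = rng.random(200_000)
x = sample_tilted(u, 1.0)          # profile rising toward the hotter side
# analytic mean of the tilted profile: E[x] = 0.5 + m/12
```

Drawing emission positions from such a tilted profile, instead of uniformly, is what moves emitted particles toward the hot side of a cell and suppresses teleportation across coarse cells.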
McNair, Peter J; Colvin, Matt; Reid, Duncan
2011-02-01
To compare the accuracy of 12 maximal strength (1-repetition maximum [1-RM]) equations for predicting quadriceps strength in people with osteoarthritis (OA) of the knee joint. Eighteen subjects with OA of the knee joint attended a rehabilitation gymnasium on 3 occasions: 1) a familiarization session; 2) a session where the 1-RM of the quadriceps was established using a weights machine for an open-chain knee extension exercise and a leg press exercise; and 3) a session where the subjects performed repetitions with a load they could lift approximately 10 times only. The data were entered into 12 prediction equations to calculate 1-RM strength, and the results were compared to the actual 1-RM data. Data were examined using Bland and Altman graphs and statistics, intraclass correlation coefficients (ICCs), and typical error values between the actual 1-RM and the respective 1-RM prediction equation data. Difference scores (predicted 1-RM minus actual 1-RM) across the injured and control legs were also compared. For the knee extension exercise, the Brown, Brzycki, Epley, Lander, Mayhew et al, Poliquin, and Wathen prediction equations demonstrated the greatest levels of predictive accuracy. All of the ICCs were high (range 0.96-0.99), and typical errors were between 3% and 4%. For the leg press exercise, the Adams, Berger, Kemmler et al, and O'Conner et al equations demonstrated the greatest levels of predictive accuracy. All of the ICCs were high (range 0.95-0.98), and the typical errors ranged from 5.9% to 6.3%. This study provided evidence supporting the use of prediction equations to assess maximal strength in individuals with an osteoarthritic knee joint.
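Two of the cited prediction equations, Brzycki and Epley, have well-known closed forms and can be written directly; the 50 kg, 10-repetition example is illustrative, not data from the study.

```python
def brzycki_1rm(weight, reps):
    """Brzycki equation: 1-RM = w * 36 / (37 - r), valid for r < 37."""
    return weight * 36.0 / (37.0 - reps)

def epley_1rm(weight, reps):
    """Epley equation: 1-RM = w * (1 + r / 30)."""
    return weight * (1.0 + reps / 30.0)

# a hypothetical 10-repetition maximum of 50 kg on the knee extension machine
print(round(brzycki_1rm(50, 10), 1))   # → 66.7
print(round(epley_1rm(50, 10), 1))     # → 66.7
```

At 10 repetitions the two equations happen to agree exactly (both reduce to w * 4/3), which is consistent with the study's finding that several equations perform comparably near the 10-RM load.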
Aerobic Exercise and Attention Deficit Hyperactivity Disorder: Brain Research
Choi, Jae Won; Han, Doug Hyun; Kang, Kyung Doo; Jung, Hye Yeon; Renshaw, Perry F.
2017-01-01
Purpose To enhance the effects of stimulants and thereby minimize medication doses, we hypothesized that aerobic exercise might be an effective adjuvant therapy for enhancing the effects of methylphenidate on the clinical symptoms, cognitive function, and brain activity of adolescents with attention deficit hyperactivity disorder (ADHD). Methods Thirty-five adolescents with ADHD were randomly assigned to one of two groups in a 1:1 ratio: methylphenidate treatment + 6-wk exercise (sports-ADHD) or methylphenidate treatment + 6-wk education (edu-ADHD). At baseline and after 6 wk of treatment, symptoms of ADHD, cognitive function, and brain activity were evaluated using the DuPaul attention deficit hyperactivity disorder rating scale-Korean version (K-ARS), the Wisconsin Card Sorting Test, and 3-T functional magnetic resonance imaging, respectively. Results The K-ARS total score and perseverative errors in the sports-ADHD group decreased compared with those in the edu-ADHD group. After the 6-wk treatment period, the mean β value of the right frontal lobe in the sports-ADHD group increased compared with that in the edu-ADHD group. The mean β value of the right temporal lobe in the sports-ADHD group decreased, whereas that in the edu-ADHD group did not change. The change in activity within the right prefrontal cortex in all adolescents with ADHD was negatively correlated with the change in K-ARS scores and perseverative errors. Conclusions The current results indicate that aerobic exercise increased the effectiveness of methylphenidate on clinical symptoms, perseverative errors, and brain activity within the right frontal and temporal cortices in response to Wisconsin Card Sorting Test stimulation. PMID:24824770
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors, such as the second and third order terms in the refractive index formula and errors due to bending of the signal, and the total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimation and correction of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to the excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy using the proposed correction formulas.
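For context, the first-order term that the standard dual-frequency method does remove can be sketched as follows; the 50 TECU value and the pseudorange are illustrative assumptions, and these are the textbook first-order delay and ionosphere-free combination, not the paper's higher-order corrections.

```python
# First-order ionospheric group delay: d = 40.3 * TEC / f^2  (metres,
# TEC in electrons/m^2, f in Hz).
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 frequencies (Hz)
TEC = 50.0 * 1e16                    # 50 TECU, an assumed daytime value

d1 = 40.3 * TEC / F1**2              # about 8.1 m of delay at L1
d2 = 40.3 * TEC / F2**2              # about 13.4 m of delay at L2

# ionosphere-free pseudorange combination of measurements P1, P2:
# P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)
gamma = (F1 / F2) ** 2
p1 = 20_000_000.0 + d1               # hypothetical geometric range + L1 delay
p2 = 20_000_000.0 + d2               # same range + L2 delay
p_if = (gamma * p1 - p2) / (gamma - 1.0)   # first-order term cancels
```

The combination recovers the geometric range to numerical precision because the first-order delay scales exactly as 1/f²; the second- and third-order terms and ray bending do not, which is precisely the residual error budget the paper addresses.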
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
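The link between signal quality and bit-error rate for on-off keying can be sketched with the standard Gaussian-noise expression; the Q-factor values below are illustrative, and this is not the paper's full scintillation model.

```python
import math

def ber_ook(q_factor):
    """BER for on-off keying in Gaussian noise: 0.5 * erfc(Q / sqrt(2)),
    with Q-factor Q = (mu1 - mu0) / (sigma1 + sigma0)."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

# scintillation lowers the effective Q-factor; adaptive optics restores it
ber_uncompensated = ber_ook(3.0)   # roughly 1.3e-3
ber_compensated = ber_ook(6.0)     # roughly 1e-9, several orders of magnitude lower
```

Doubling the effective Q-factor drops the error rate by about six orders of magnitude, which is the scale of improvement the abstract attributes to adaptive-optics compensation.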
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1985-01-01
Three experiments on fault tolerant multiprocessors (FTMP) were begun. They are: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. 
The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
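A minimal value-learning sketch (not the authors' computational model) shows the two signals described above: a prediction error at feedback that shrinks with learning, and a learned value available at choice presentation that grows. The learning rate and reward probability are assumptions, and the update uses the expected reward so the run is deterministic.

```python
# Rescorla-Wagner-style value learning: delta = r - V, V <- V + alpha * delta
alpha = 0.2          # assumed learning rate
p_reward = 0.8       # assumed reward probability of the chosen option
V = 0.0              # learned value, available at choice presentation
deltas = []

for trial in range(200):
    delta = p_reward - V        # reward prediction error (expected reward)
    V += alpha * delta
    deltas.append(abs(delta))

# early in learning the feedback-locked error is large; late it is small,
# while V has converged to the expected reward -- mirroring the reported
# shift from the feedback ERN to the choice-locked reward positivity
```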
Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed
2017-11-01
The main challenge in simulating lead removal is the non-linearity of the relationships between the process parameters. Conventional modelling techniques usually treat this problem with linear methods. An alternative is an artificial neural network (ANN), selected here to capture the non-linear interactions among the variables. Synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and initial Pb²⁺ concentration (3 to 60 mg/l). The system was fed and trained with 158 experimental runs carried out at laboratory scale. Two ANN types were designed in this work, feed-forward back-propagation and layer-recurrent; the two were compared on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) on the testing dataset. The ANN model of lead removal was subjected to accuracy determination, and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error was 14.93% for the feed-forward back-propagation neural network model.
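The comparison metrics named here can be computed with a short helper; the test values below are hypothetical removal efficiencies, not the study's data.

```python
import numpy as np

def metrics(y_true, y_pred):
    """MSE, RMSE, MAPE (%) and R^2 for comparing model predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mse, rmse, mape, r2

# hypothetical removal efficiencies (%) on a held-out test set
mse, rmse, mape, r2 = metrics([90.0, 75.0, 60.0, 95.0],
                              [88.0, 77.0, 62.0, 94.0])
```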
Reach and speed of judgment propagation in the laboratory.
Moussaïd, Mehdi; Herzog, Stefan M; Kämmer, Juliane E; Hertwig, Ralph
2017-04-18
In recent years, a large body of research has demonstrated that judgments and behaviors can propagate from person to person. Phenomena as diverse as political mobilization, health practices, altruism, and emotional states exhibit similar dynamics of social contagion. The precise mechanisms of judgment propagation are not well understood, however, because it is difficult to control for confounding factors such as homophily or dynamic network structures. We introduce an experimental design that renders possible the stringent study of judgment propagation. In this design, experimental chains of individuals can revise their initial judgment in a visual perception task after observing a predecessor's judgment. The positioning of a very good performer at the top of a chain created a performance gap, which triggered waves of judgment propagation down the chain. We evaluated the dynamics of judgment propagation experimentally. Despite strong social influence within pairs of individuals, the reach of judgment propagation across a chain rarely exceeded a social distance of three to four degrees of separation. Furthermore, computer simulations showed that the speed of judgment propagation decayed exponentially with the social distance from the source. We show that information distortion and the overweighting of other people's errors are two individual-level mechanisms hindering judgment propagation at the scale of the chain. Our results contribute to the understanding of social-contagion processes, and our experimental method offers numerous new opportunities to study judgment propagation in the laboratory.
Learning by observation: insights from Williams syndrome.
Foti, Francesca; Menghini, Deny; Mandolesi, Laura; Federico, Francesca; Vicari, Stefano; Petrosini, Laura
2013-01-01
Observing another person performing a complex action accelerates the observer's acquisition of the same action and limits the time-consuming process of learning by trial and error. Observational learning makes an interesting and potentially important topic in the developmental domain, especially when disorders are considered. The implications of studies aimed at clarifying whether and how this form of learning is spared by pathology are manifold. We focused on a specific population with learning and intellectual disabilities, individuals with Williams syndrome. The performance of twenty-eight individuals with Williams syndrome was compared with that of thirty-two mental age- and gender-matched typically developing children on tasks of learning a visuo-motor sequence by observation or by trial and error. Regardless of the learning modality, acquiring the correct sequence involved three main phases: a detection phase, in which participants discovered the correct sequence and learned how to perform the task; an exercise phase, in which they reproduced the sequence until performance was error-free; and an automatization phase, in which by repeating the error-free sequence they became accurate and speedy. Participants with Williams syndrome benefited from observational training (in which they observed an actor detecting the visuo-motor sequence) in the detection phase, but performed worse than typically developing children in the exercise and automatization phases. Thus, by exploiting competencies learned by observation, individuals with Williams syndrome detected the visuo-motor sequence, putting into action the appropriate procedural strategies. Conversely, their impaired performance in the exercise phase appeared linked to impaired spatial working memory, and their deficits in the automatization phase to deficits in the processes that increase efficiency and speed of the response.
Overall, observational experience was advantageous for acquiring competencies, since it primed subjects' interest in the actions to be performed and functioned as a catalyst for executed action.
Generalized Fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît
2016-01-01
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.
2009-01-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
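A propagation-of-error calculation of this kind can be sketched by comparing a Monte Carlo estimate with the first-order GUM formula for a ratio; the signal levels and uncertainties below are illustrative assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# hypothetical isotope signals with assumed relative standard uncertainties
a = rng.normal(1.00, 0.02, n)      # minor isotope signal, 2% RSD
b = rng.normal(4.00, 0.04, n)      # reference isotope signal, 1% RSD
ratio = a / b

mc_mean, mc_std = ratio.mean(), ratio.std()

# first-order GUM propagation for R = a/b:
# (u_R / R)^2 = (u_a / a)^2 + (u_b / b)^2
gum_std = (1.0 / 4.0) * np.hypot(0.02 / 1.0, 0.04 / 4.0)
```

For small relative uncertainties the two agree closely; the Monte Carlo route remains valid when the uncertainties are large or the measurement function is strongly non-linear, where the first-order formula breaks down.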
Potential errors in body composition as estimated by whole body scintillation counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.
Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of ⁴⁰K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent ⁴⁰K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting ²¹⁴Bi, a progeny of radon. ⁴⁰K and ²¹⁴Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in ⁴⁰K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from ²¹⁴Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by ⁴⁰K counting.
Beck, Eric N; Intzandt, Brittany N; Almeida, Quincy J
2018-01-01
It may be possible to use attention-based exercise to decrease demands associated with walking in Parkinson's disease (PD), and thus improve dual task walking ability. For example, an external focus of attention (focusing on the effect of an action on the environment) may recruit automatic control processes degenerated in PD, whereas an internal focus (limb movement) may recruit conscious (nonautomatic) control processes. Thus, we aimed to investigate how externally and internally focused exercise influences dual task walking and symptom severity in PD. Forty-seven participants with PD were randomized to either an Externally (n = 24) or Internally (n = 23) focused group and completed 33 one-hour attention-based exercise sessions over 11 weeks. In addition, 16 participants were part of a control group. Before, after, and 8 weeks following the program (pre/post/washout), gait patterns were measured during single and dual task walking (digit-monitoring task, ie, walking while counting numbers announced by an audio-track), and symptom severity (UPDRS-III) was assessed ON and OFF dopamine replacement. Pairwise comparisons (95% confidence intervals [CIs]) and repeated-measures analyses of variance were conducted. Pre to post: Dual task step time decreased in the external group (Δ = 0.02 seconds, CI 0.01-0.04). Dual task step length (Δ = 2.3 cm, CI 0.86-3.75) and velocity (Δ = 4.5 cm/s, CI 0.59-8.48) decreased (became worse) in the internal group. UPDRS-III scores (ON and OFF) decreased (improved) in only the External group. Pre to washout: Dual task step time (P = .005) and percentage in double support (P = .014) significantly decreased (improved) in both exercise groups, although only the internal group increased error on the secondary counting task (ie, more errors monitoring numbers). UPDRS-III scores in both exercise groups significantly decreased (P = .001).
Since dual task walking improvements were found immediately and 8 weeks after the cessation of the externally focused exercise program, we conclude that externally focused exercise may improve the functioning of automatic control networks in PD, whereas internally focused exercise hindered dual tasking ability. Overall, externally focused exercise led to greater rehabilitation benefits in dual tasking and motor symptoms compared with internally focused exercise.
Development of Precise Lunar Orbit Propagator and Lunar Polar Orbiter's Lifetime Analysis
NASA Astrophysics Data System (ADS)
Song, Young-Joo; Park, Sang-Young; Kim, Hae-Dong; Sim, Eun-Sup
2010-06-01
To prepare for a Korean lunar orbiter mission, a precise lunar orbit propagator, the Yonsei Precise Lunar Orbit Propagator (YSPLOP), was developed. The propagator can include accelerations due to the Moon's non-spherical gravity, the point masses of the Earth, Moon, Sun, Mars, and Jupiter, and solar radiation pressure. The developed propagator's performance was validated: propagation errors between YSPLOP and STK/Astrogator reach a maximum of about 4 m in the along-track direction over 30 days (Earth time) of propagation. It was also found that the lifetime of a lunar polar orbiter is strongly affected by the degree and order of the lunar gravity model, by third-body gravitational attractions (especially the Earth's), and by the orbital inclination. The reliable lifetime of a circular lunar polar orbiter at about 100 km altitude is estimated to be about 160 days (Earth time). However, to estimate that lifetime reasonably, it is strongly recommended to use a lunar gravity field of at least degree and order 50 × 50. The results provided in this paper are expected to support further progress in the design of Korea's lunar orbiter missions.
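The core of any such propagator is numerical integration of the equations of motion. As a minimal sketch (not the YSPLOP formulation: this assumes a planar two-body model with assumed values for the lunar gravitational parameter and radius, and omits the non-spherical gravity, third-body, and radiation-pressure terms the abstract lists), a fixed-step RK4 propagator can be sanity-checked by monitoring energy drift:

```python
import math

MU_MOON = 4902.8  # km^3/s^2, lunar gravitational parameter (assumed value)
R_MOON = 1737.4   # km, mean lunar radius (assumed value)

def deriv(state):
    """Planar two-body equations of motion: state = (x, y, vx, vy)."""
    x, y, vx, vy = state
    r = math.hypot(x, y)
    a = -MU_MOON / r ** 3
    return (vx, vy, a * x, a * y)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt / 2))
    k3 = deriv(shift(state, k2, dt / 2))
    k4 = deriv(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def energy(state):
    """Specific orbital energy, a conserved quantity used as an error proxy."""
    x, y, vx, vy = state
    return 0.5 * (vx ** 2 + vy ** 2) - MU_MOON / math.hypot(x, y)

# 100 km circular orbit (planar stand-in for a polar orbiter)
r0 = R_MOON + 100.0
v0 = math.sqrt(MU_MOON / r0)
state = (r0, 0.0, 0.0, v0)
e0 = energy(state)
for _ in range(2000):          # 2000 x 10 s steps, roughly three orbits
    state = rk4_step(state, 10.0)
drift = abs(energy(state) - e0) / abs(e0)
```

Comparing conserved quantities, as done here with specific energy, is a common cross-check alongside comparisons against a trusted propagator such as STK/Astrogator.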
1984-03-16
cooperation, making it impractical for large-scale studies. Today, the hydrostatic method is used primarily in laboratory studies by exercise physiologists...inappropriate equations could cause serious errors and result in adverse advice being given to the athlete concerning dietary habits and/or exercise ...football players. Their results showed that when black and white football players are matched somatotypically (type of body build), there is no significant
1991-05-01
Hall, 1967. 6. Rosenblatt, F., Principles of Neurodynamics , Spartan Books, 1962. 7. Minsky, M. and Papert, S., Perceptrons, MIT Press, Revised Edition...sentations by Error Propagation, Rumelhart and McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition , Vol
Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Y.; Song, Y.; Lu, J.
2018-05-01
Semi-global matching (SGM) performs dynamic programming while treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and as the disparity search range expands, the accuracy and efficiency of the algorithm decrease drastically. This paper presents a dense matching algorithm that integrates SIFT and SGM. It takes the successful matching pairs found by SIFT as control points to direct the path in dynamic programming, thereby truncating error propagation. In addition, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. Experimental results on the Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range, and improve matching accuracy.
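The standard SGM cost-aggregation recurrence that the paper builds on can be sketched for a single left-to-right path; the SIFT control points and the direction-dependent path weights of the proposed method are not reproduced here, and the penalties P1 and P2 are illustrative:

```python
def aggregate_scanline(cost, P1=1.0, P2=4.0):
    """SGM cost aggregation along one (left-to-right) path direction.

    cost[p][d] is the matching cost at pixel p, disparity d. The recurrence is
    L(p,d) = C(p,d) + min(L(p-1,d),
                          L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                          min_k L(p-1,k)+P2) - min_k L(p-1,k)
    """
    n, D = len(cost), len(cost[0])
    L = [list(cost[0])]                 # first pixel: no predecessor
    for p in range(1, n):
        prev = L[-1]
        m = min(prev)                   # min_k L(p-1,k)
        row = []
        for d in range(D):
            best = prev[d]              # same disparity: no penalty
            if d > 0:
                best = min(best, prev[d - 1] + P1)   # small change: P1
            if d < D - 1:
                best = min(best, prev[d + 1] + P1)
            best = min(best, m + P2)    # large change: P2
            row.append(cost[p][d] + best - m)        # subtract m to bound growth
        L.append(row)
    return L
```

In full SGM this aggregation is repeated along several path directions and the per-path costs are summed before a winner-take-all disparity choice; the paper's contribution is to anchor and reweight these paths using SIFT matches.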
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve error performance astonishingly close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
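Gallager's hard-decision bit flipping decoder mentioned above is simple enough to sketch. The toy below uses a (7,4) Hamming parity-check matrix purely as a small stand-in; a real finite-geometry LDPC matrix would be much larger and sparser:

```python
def bit_flip_decode(H, y, max_iters=20):
    """Gallager-style hard-decision bit flipping.

    H: parity-check matrix (list of 0/1 rows), y: received hard bits.
    Each iteration flips the bits involved in the most unsatisfied checks.
    """
    x = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_iters):
        syndrome = [sum(H[i][j] * x[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return x                    # all parity checks satisfied
        # for each bit, count the unsatisfied checks it participates in
        votes = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        worst = max(votes)
        x = [b ^ 1 if votes[j] == worst else b for j, b in enumerate(x)]
    return x

# (7,4) Hamming parity-check matrix as a small illustrative stand-in
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 0, 0, 0, 0, 0]
received[2] ^= 1                        # inject a single bit error
decoded = bit_flip_decode(H, received)
```

For the all-zero codeword with one flipped bit, the decoder recovers the codeword within a few iterations; the abstract's point is that the same hard-decision procedure already performs well on the finite-geometry codes.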
Data vs. information: A system paradigm
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1982-01-01
The data system designer requires data parameters, and is dependent on the user to convert information needs to these data parameters. This conversion will be done with more or less accuracy, beginning a chain of inaccuracies which propagate through the system and which, in the end, may prevent the user from converting the data received into the information required. The concept to be pursued is that errors occur in various parts of the system and, having occurred, propagate to the end. Modeling of the system may allow an estimation of the effects at any point and the final accumulated effect, and may provide a method of allocating an error budget among the system components. The selection of the various technical parameters which a data system must meet must be done in relation to the ability of the user to turn the cold, impersonal data into a live, personal decision or piece of information.
Bluetooth Heart Rate Monitors For Spaceflight
NASA Technical Reports Server (NTRS)
Buxton, R. E.; West, M. R.; Kalogera, K. L.; Hanson, A. M.
2016-01-01
Heart rate monitoring is required for crewmembers during exercise aboard the International Space Station (ISS) and will be for future exploration missions. The cardiovascular system must be sufficiently stressed throughout a mission to maintain the ability to perform nominal and contingency/emergency tasks. High quality heart rate data are required to accurately determine the intensity of exercise performed by the crewmembers and show maintenance of VO2max. The quality of the data collected on ISS is subject to multiple limitations and is insufficient to meet current requirements. PURPOSE: To evaluate the performance of commercially available Bluetooth heart rate monitors (BT_HRM) and their ability to provide high quality heart rate data to monitor crew health aboard the ISS and during future exploration missions. METHODS: Nineteen subjects completed 30 data collection sessions of various intensities on the treadmill and/or cycle. Subjects wore several BT_HRM technologies for each testing session. One electrode-based chest strap (CS) was worn, while one or more optical sensors (OS) were worn. Subjects were instrumented with a 12-lead ECG to compare the heart rate data from the Bluetooth sensors. Each BT_HRM data set was time matched to the ECG data and a +/-5bpm threshold was applied to the difference between the 2 data sets. Percent error was calculated based on the number of data points outside the threshold and the total number of data points. RESULTS: The electrode-based chest straps performed better than the optical sensors. The best performing CS was CS1 (1.6% error), followed by CS4 (3.3% error), CS3 (6.4% error), and CS2 (9.2% error). The OS resulted in 10.4% error for OS1 and 14.9% error for OS2. CONCLUSIONS: The highest quality data came from CS1, but unfortunately it has been discontinued by the manufacturer. The optical sensors have not been ruled out for use, but more investigation is needed to determine how to obtain the best quality data. 
CS2 will be used in an ISS Bluetooth validation study, because it simultaneously transmits magnetic pulse that is integrated with existing exercise hardware on ISS. The simultaneous data streams allow for beat-to-beat comparison between the current ISS standard and CS2. Upon Bluetooth validation aboard ISS, the research team will down select a new BT_HRM for operational use.
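The percent-error metric described in the methods, i.e., the fraction of time-matched samples falling outside a ±5 bpm band around the ECG reference, is straightforward to compute; the heart-rate values below are hypothetical:

```python
def percent_error(bt_hr, ecg_hr, threshold=5):
    """Percent of time-matched samples deviating from the ECG reference
    by more than +/- threshold bpm."""
    assert len(bt_hr) == len(ecg_hr)
    outside = sum(1 for b, e in zip(bt_hr, ecg_hr) if abs(b - e) > threshold)
    return 100.0 * outside / len(bt_hr)

ecg = [120, 125, 130, 135, 140]        # hypothetical ECG reference, bpm
strap = [121, 124, 138, 136, 139]      # hypothetical chest-strap data; third sample is 8 bpm off
err = percent_error(strap, ecg)        # 1 of 5 samples outside +/-5 bpm -> 20.0
```

The reported monitor rankings (CS1 at 1.6% error through OS2 at 14.9%) are this quantity computed over all time-matched samples in a session.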
Interference competition and invasion: spatial structure, novel weapons and resistance zones.
Allstadt, Andrew; Caraco, Thomas; Molnár, F; Korniss, G
2012-08-07
Certain invasive plants may rely on interference mechanisms (e.g., allelopathy) to gain competitive superiority over native species. But expending resources on interference presumably exacts a cost in another life-history trait, so that the significance of interference competition for invasion ecology remains uncertain. We model ecological invasion when combined effects of preemptive and interference competition govern interactions at the neighborhood scale. We consider three cases. Under "novel weapons," only the initially rare invader exercises interference. For "resistance zones" only the resident species interferes, and finally we take both species as interference competitors. Interference increases the other species' mortality, opening space for colonization. However, a species exercising greater interference has reduced propagation, which can hinder its colonization of open sites. Interference never enhances a rare invader's growth in the homogeneously mixing approximation to our model. But interference can significantly increase an invader's competitiveness, and its growth when rare, if interactions are structured spatially. That is, interference can increase an invader's success when colonization of open sites depends on local, rather than global, species densities. In contrast, interference enhances the common, resident species' resistance to invasion independently of spatial structure, unless the propagation-cost is too great. The particular combination of propagation and interference producing the strongest biotic resistance in a resident species depends on the shape of the tradeoff between the two traits. Increases in background mortality (i.e., mortality not due to interference) always reduce the effectiveness of interference competition. Copyright © 2012 Elsevier Ltd. All rights reserved.
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
...). Here, the Navy identifies the distance that a marine mammal is likely to travel during the time... typically travel within a given time-delay period (Table 1). Based on acoustic propagation modeling... Speed and Length of Time-Delay Potential Species group Swim speed Time-delay (min) distance traveled (yd...
2014-09-30
exercises, the most abundant species by biomass is Pacific hake, Merluccius productus, a fish with an air-filled swimbladder that averages 50 cm in length...its type of prey) may affect the scattering characteristics of the animal, especially if the animal has eaten hard-shelled mollusc prey. Figure 7
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial to acquiring information that is applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field ([Formula: see text]) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a [Formula: see text] map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the [Formula: see text] map propagated noise into the T1 maps at levels comparable to either of the two SPGR images. Improving the precision of the [Formula: see text] measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments, the error propagated from the [Formula: see text] map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the [Formula: see text] map may result in grossly overestimating the precision of the estimated T1 values.
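The standard first-order error propagation rule the authors apply, var(f) ≈ Σᵢ (∂f/∂xᵢ)² σᵢ², can be sketched generically with numerical partial derivatives; the actual SPGR signal equation and B1 mapping model are not reproduced here, and the toy function is only for checking the rule:

```python
def propagate_variance(f, x, sigma, h=1e-6):
    """First-order error propagation: var(f) = sum_i (df/dx_i)^2 * sigma_i^2,
    with the partial derivatives estimated by central differences."""
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)   # numerical partial derivative
        var += (dfdx * sigma[i]) ** 2
    return var

# toy check: f = x * y has var(f) = y^2 sx^2 + x^2 sy^2
f = lambda v: v[0] * v[1]
var = propagate_variance(f, [2.0, 3.0], [0.1, 0.2])
# analytic value: 9 * 0.01 + 4 * 0.04 = 0.25
```

In the paper's setting, x would be the two SPGR intensities plus the transmit-field estimate, and the comparison of the three resulting terms is what shows the B1 map's contribution is not negligible.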
Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgered, G.; Davis, J.L.; Herring, T.A.
1991-04-10
An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variation of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates than the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.
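The weighted-root-mean-square scatter used above to compare the two correction methods can be computed as follows; the baseline-length deviations and uncertainties are hypothetical:

```python
import math

def wrms_scatter(values, sigmas):
    """Weighted-root-mean-square scatter about the weighted mean,
    with weights w_i = 1 / sigma_i^2."""
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return math.sqrt(sum(wi * (v - mean) ** 2 for wi, v in zip(w, values)) / sum(w))

# hypothetical baseline-length estimates (mm deviation from nominal) and 1-sigma errors
lengths = [10.0, 12.0, 9.0, 11.0]
sigmas = [1.0, 1.0, 1.0, 1.0]
scatter = wrms_scatter(lengths, sigmas)
```

A 13% reduction in this scatter when switching from Kalman filtering to WVR corrections is the headline repeatability result of the abstract.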
Nguyen, Kieu T H; Adamkiewicz, Marta A; Hebert, Lauren E; Zygiel, Emily M; Boyle, Holly R; Martone, Christina M; Meléndez-Ríos, Carola B; Noren, Karen A; Noren, Christopher J; Hall, Marilena Fitzsimons
2014-10-01
A target-unrelated peptide (TUP) can arise in phage display selection experiments as a result of a propagation advantage exhibited by the phage clone displaying the peptide. We previously characterized HAIYPRH, from the M13-based Ph.D.-7 phage display library, as a propagation-related TUP resulting from a G→A mutation in the Shine-Dalgarno sequence of gene II. This mutant was shown to propagate in Escherichia coli at a dramatically faster rate than phage bearing the wild-type Shine-Dalgarno sequence. We now report 27 additional fast-propagating clones displaying 24 different peptides and carrying 14 unique mutations. Most of these mutations are found either in or upstream of the gene II Shine-Dalgarno sequence, but still within the mRNA transcript of gene II. All 27 clones propagate at significantly higher rates than normal library phage, most within experimental error of wild-type M13 propagation, suggesting that mutations arise to compensate for the reduced virulence caused by the insertion of a lacZα cassette proximal to the replication origin of the phage used to construct the library. We also describe an efficient and convenient assay to diagnose propagation-related TUPs among peptide sequences selected by phage display. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors.
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
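The Monte Carlo step, propagating measurement noise through the inverse of a monotone standard curve to obtain an asymmetric prediction interval, can be sketched as follows. The logistic-like curve, noise level, and bounds are assumptions for illustration, not the paper's fitted models:

```python
import math
import random

random.seed(0)

def standard_curve(conc):
    """Monotone (logistic-like) assay intensity vs. concentration; assumed shape."""
    return 1.0 / (1.0 + math.exp(-(math.log10(conc) - 1.0)))

def invert(intensity, lo=1e-2, hi=1e4):
    """Invert the monotone curve by bisection on a log-spaced bracket."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if standard_curve(mid) < intensity:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Monte Carlo: perturb a measured intensity and examine the spread of
# predicted concentrations, yielding an asymmetric interval
true_conc = 20.0
y = standard_curve(true_conc)
draws = [invert(min(max(y + random.gauss(0, 0.02), 1e-6), 1 - 1e-6))
         for _ in range(2000)]
draws.sort()
lo95, hi95 = draws[50], draws[1949]    # rough central 95% interval
```

Because the curve flattens toward its asymptotes, equal intensity perturbations map to unequal concentration changes, which is why the MC intervals come out asymmetric where plain propagation of error forces symmetric ones.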
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)
NASA Technical Reports Server (NTRS)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen uptake (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
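The trade-off behind the reported optima can be illustrated with a simple error model: the truncation error of a central-difference material-acceleration estimate grows as Δt², while propagated random measurement noise decays as 1/Δt, so the total error has an interior minimum in Δt. The coefficients below are illustrative, not the paper's fitted values:

```python
def total_error(dt, c_trunc=1.0, sigma=1e-3):
    """Illustrative error model for a central-difference acceleration estimate."""
    truncation = c_trunc * dt ** 2         # O(dt^2) central-difference bias
    random_part = sigma / (2 ** 0.5 * dt)  # measurement noise amplified by differencing
    return truncation + random_part

# scan dt over a log-spaced grid from 1e-3 to ~1 and find the minimizer
dts = [10 ** (-3 + 0.01 * i) for i in range(300)]
best_dt = min(dts, key=total_error)
```

Setting the derivative of the model to zero gives the analytic optimum Δt = (σ / (2√2 c))^(1/3), which is the kind of a priori sampling recommendation the abstract refers to.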
NASA Astrophysics Data System (ADS)
Olivier, Thomas; Billard, Franck; Akhouayri, Hassan
2004-06-01
Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to photoinduced damage of the material. Realistic simulations of the propagation of high power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it appears that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared with the other methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity, and can give absolute measurements if the incident beam is accurately characterized. However, the method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness, and nonlinear phase shift. According to our simulations and a rigorous analysis of the measured Z-scan signal, some unjustified approximations lead to very large errors. Thus, by reducing possible errors in the interpretation of Z-scan experimental studies, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.
Lin, Hsueh-Chun; Chiang, Shu-Yin; Lee, Kai; Kan, Yao-Chiang
2015-01-19
This paper proposes a model for recognizing motions performed during rehabilitation exercises for frozen shoulder conditions. The model consists of wearable wireless sensor network (WSN) inertial sensor nodes, which were developed for this study, and enables ubiquitous measurement of bodily motions. The model employs the back-propagation neural network (BPNN) algorithm to compute the motion data contained in the WSN packets; herein, six types of rehabilitation exercises were recognized. The packets sent by each node are converted into six components of acceleration and angular velocity along three axes. Motion features such as basic acceleration, angular velocity, and derived tilt angle were input into the training procedure of the BPNN algorithm. In measurements of thirteen volunteers, the accelerations and included angles of the nodes were selected from the possible features to demonstrate the procedure. Five exercises involving simple swinging and stretching movements were recognized with an accuracy of 85%-95%; however, the accuracy with which exercises entailing spiral rotations were recognized was approximately 60%. Thus, a characteristic space and an enveloped spectrum were suggested as improved derivative features to enable the identification of customized parameters. Finally, a real-time monitoring interface was developed for practical implementation. The proposed model can be applied in ubiquitous healthcare self-management to recognize rehabilitation exercises.
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
Abstract A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853
Time estimates in a long-term time-free environment. [human performance
NASA Technical Reports Server (NTRS)
Lavie, P.; Webb, W. B.
1975-01-01
Subjects in a time-free environment for 14 days estimated the hour and day several times a day. Half of the subjects were under a heavy exercise regime. During the waking hours, the no-exercise group showed no difference between estimated and real time, whereas the exercise group showed significantly shorter estimated than real time. Neither group showed a difference after the sleeping periods. However, the mean accumulated error for the two groups was 48.73 hours and was strongly related to the displacements of sleep/waking behavior. It is concluded that behavioral cues are the primary determinants of time estimates in time-free environments.
Comparison of Consumer and Research Monitors under Semistructured Settings.
Bai, Yang; Welk, Gregory J; Nam, Yoon Ho; Lee, Joey A; Lee, Jung-Min; Kim, Youngwon; Meier, Nathan F; Dixon, Philip M
2016-01-01
This study evaluated the relative validity of different consumer and research activity monitors during semistructured periods of sedentary activity, aerobic exercise, and resistance exercise. Fifty-two (28 male and 24 female) participants age 18-65 yr performed 20 min of self-selected sedentary activity, 25 min of aerobic exercise, and 25 min of resistance exercise, with 5 min of rest between each activity. Each participant wore five wrist-worn consumer monitors [Fitbit Flex, Jawbone Up24, Misfit Shine (MS), Nike+ Fuelband SE (NFS), and Polar Loop] and two research monitors [ActiGraph GT3X+ on the waist and BodyMedia Core (BMC) on the arm] while being concurrently monitored with Oxycon Mobile (OM), a portable metabolic measuring system. Energy expenditure (EE) on different activity sessions was measured by OM and estimated by all monitors. Mean absolute percent error (MAPE) values for the full 80-min protocol ranged from 15.3% (BMC) to 30.4% (MS). EE estimates from ActiGraph GT3X+ were found to be equivalent to those from OM (± 10% equivalence zone, 285.1-348.5). Correlations between OM and the various monitors were generally high (ranged between 0.71 and 0.90). Three monitors had MAPE values lower than 20% for sedentary activity: BMC (15.7%), MS (18.2%), and NFS (20.0%). Two monitors had MAPE values lower than 20% for aerobic exercise: BMC (17.2%) and NFS (18.5%). None of the monitors had MAPE values lower than 25% for resistance exercise. Overall, the research monitors and Fitbit Flex, Jawbone Up24, and NFS provided reasonably accurate total EE estimates at the individual level. However, larger error was evident for individual activities, especially resistance exercise. Further research is needed to examine these monitors across various activities and intensities as well as under real-world conditions.
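The MAPE values above compare each monitor's energy expenditure estimate with the Oxycon Mobile criterion; the computation is simply the mean of per-session absolute percent errors. The values below are hypothetical:

```python
def mape(estimates, criterion):
    """Mean absolute percent error of monitor EE estimates vs. the criterion measure."""
    return 100.0 * sum(abs(e - c) / c
                       for e, c in zip(estimates, criterion)) / len(criterion)

oxycon = [300.0, 320.0, 280.0]     # hypothetical criterion EE values (kcal)
monitor = [330.0, 310.0, 250.0]    # one monitor's hypothetical estimates
err = mape(monitor, oxycon)
```

A monitor such as BMC at 15.3% over the full protocol means its estimates deviated from the criterion by about 15% on average, in either direction.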
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q⃗) = 0 into a nominal sound source S(q⃗) and a sound propagation operator L such that L(q⃗) = S(q⃗). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation are therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
NASA Astrophysics Data System (ADS)
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
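The core of a GLIRK propagator is the implicit stage system of a Gauss-Legendre Runge-Kutta step. A minimal sketch of the two-stage, fourth-order scheme for a scalar autonomous ODE, solving the stage equations by plain fixed-point iteration rather than the Newton-type solvers and parallel stage evaluation described above; the exponential-decay test problem is an illustrative assumption, not the paper's orbit/attitude dynamics:

```python
import math

# Butcher tableau for the two-stage, fourth-order Gauss-Legendre IRK scheme
S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0],
     [0.25 + S3 / 6.0, 0.25]]
B = [0.5, 0.5]

def glirk2_step(f, y, h, iters=100):
    """One step for an autonomous scalar ODE y' = f(y).

    The implicit stage equations k_i = f(y + h * sum_j A[i][j] * k_j)
    are solved here by fixed-point iteration; a production propagator
    would use Newton iteration and could evaluate stages in parallel.
    """
    k = [f(y), f(y)]  # initial stage guesses
    for _ in range(iters):
        k = [f(y + h * (A[i][0] * k[0] + A[i][1] * k[1])) for i in range(2)]
    return y + h * (B[0] * k[0] + B[1] * k[1])

def integrate(f, y0, t_end, n_steps):
    y, h = y0, t_end / n_steps
    for _ in range(n_steps):
        y = glirk2_step(f, y, h)
    return y

# Exponential decay y' = -y from y(0) = 1: compare against the exact e^(-1)
y1 = integrate(lambda y: -y, 1.0, 1.0, 10)
print(abs(y1 - math.exp(-1.0)) < 1e-6)  # fourth-order accuracy at h = 0.1
```

With h = 0.1 the global error is on the order of 1e-7, consistent with the method's fourth-order accuracy.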
Ueda, Michihito; Nishitani, Yu; Kaneko, Yukihiro; Omote, Atsushi
2014-01-01
To realize analog artificial neural network hardware, the circuit element for the synapse function is important because the number of synapse elements is much larger than that of neuron elements. One candidate for this synapse element is a ferroelectric memristor. This device functions as a voltage-controllable variable resistor, which can be applied as a synapse weight. However, its conductance shows hysteresis and dispersion with respect to the input voltage, so the conductance values vary according to the history of the height and width of the applied pulse voltage. Because of the difficulty of controlling the conductance accurately, it is not easy to apply the back-propagation learning algorithm to neural network hardware having memristor synapses. To solve this problem, we proposed and simulated the following learning procedure. Employing a weight perturbation technique, we derived the error change. When the error decreased, the next pulse voltage was updated according to the back-propagation learning algorithm. If the error increased, the amplitude of the next voltage pulse was set so as to produce a similar memristor conductance but in the opposite voltage-scanning direction. By this operation, we could eliminate the hysteresis, and we confirmed that the simulated learning operation converged. We also incorporated conductance dispersion numerically in the simulation and examined the probability that the error decreased to a designated value within a predetermined number of loops. The ferroelectric has the characteristic that the magnitude of polarization does not become smaller when voltages of the same polarity are applied. This characteristic greatly improved the probability, even for a small learning rate, provided the magnitude of the dispersion was adequate. Because dispersion of analog circuit elements is inevitable, this learning procedure is useful for analog neural network hardware.
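The weight perturbation idea underlying the procedure above is simple: probe how the error changes under a small weight perturbation and step against the estimated gradient, without needing an analytic gradient of the (hysteretic) device. A minimal sketch on a toy one-weight regression; the memristor-specific pulse-voltage update is not modeled, and the data and learning rate are illustrative assumptions:

```python
import random

def train_weight_perturbation(xs, ys, epochs=200, lr=0.05, delta=1e-3):
    """Fit y = w*x by weight perturbation: probe the error change caused by
    a small perturbation of w and step opposite the estimated gradient.
    (A sketch of the general technique only; the paper's memristor
    pulse-voltage update is not modeled here.)"""
    random.seed(0)
    w = random.uniform(-1, 1)

    def err(w):
        return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    for _ in range(epochs):
        grad = (err(w + delta) - err(w)) / delta  # finite-difference probe
        w -= lr * grad
    return w

# Noise-free data with true slope 2.0
w = train_weight_perturbation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(abs(w - 2.0) < 0.01)  # converges near the true slope
```

The forward-difference probe leaves a small bias of order delta/2 in the converged weight, which is why the check uses a tolerance rather than exact equality.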
Exercise in muscle glycogen storage diseases.
Preisler, Nicolai; Haller, Ronald G; Vissing, John
2015-05-01
Glycogen storage diseases (GSD) are inborn errors of glycogen or glucose metabolism. In the GSDs that affect muscle, the consequence of a block in skeletal muscle glycogen breakdown or glucose use is an impairment of muscular performance and exercise intolerance, owing to 1) an increase in glycogen storage that disrupts contractile function and/or 2) a reduced substrate turnover below the block, which inhibits skeletal muscle ATP production. Immobility is associated with metabolic alterations in muscle leading to an increased dependence on glycogen use and a reduced capacity for fatty acid oxidation. Such changes may be detrimental for persons with GSD from a metabolic perspective. However, exercise may alter skeletal muscle substrate metabolism in ways that are beneficial for patients with GSD, such as improving exercise tolerance and increasing fatty acid oxidation. In addition, a regular exercise program, executed properly, has the potential to improve general health and fitness and to improve quality of life. In this review, we describe skeletal muscle substrate use during exercise in GSDs and how blocks in metabolic pathways affect exercise tolerance. We review the studies that have examined the effect of regular exercise training in different types of GSD. Finally, we consider how oral substrate supplementation can improve exercise tolerance, and we discuss the precautions that apply to persons with GSD who engage in exercise.
A Q-Band Free-Space Characterization of Carbon Nanotube Composites
Hassan, Ahmed M.; Garboczi, Edward J.
2016-01-01
We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment.
Boundary identification and error analysis of shocked material images
NASA Astrophysics Data System (ADS)
Hock, Margaret; Howard, Marylesa; Cooper, Leora; Meehan, Bernard; Nelson, Keith
2017-06-01
To compute quantities such as pressure and velocity from laser-induced shock waves propagating through materials, high-speed images are captured and analyzed. Shock images typically display high noise and spatially-varying intensities, causing conventional analysis techniques to have difficulty identifying boundaries in the images without making significant assumptions about the data. We present a novel machine learning algorithm that efficiently segments, or partitions, images with high noise and spatially-varying intensities, and provides error maps that describe a level of uncertainty in the partitioning. The user trains the algorithm by providing locations of known materials within the image but no assumptions are made on the geometries in the image. The error maps are used to provide lower and upper bounds on quantities of interest, such as velocity and pressure, once boundaries have been identified and propagated through equations of state. This algorithm will be demonstrated on images of shock waves with noise and aberrations to quantify properties of the wave as it progresses. DOE/NV/25946-3126 This work was done by National Security Technologies, LLC, under Contract No. DE- AC52-06NA25946 with the U.S. Department of Energy and supported by the SDRD Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X.; Wilcox, G.L.
1993-12-31
We have implemented large-scale back-propagation neural networks on a 544-node Connection Machine CM-5, using the C language in MIMD mode. The program running on 512 processors performs back-propagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. Given only their sequences, the trained network yields predicted structures for some proteins on which it has not been trained. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins) converge to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for error estimation in studies of landscape change over time. © 2005 The Ohio State University.
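The core simulation step, propagating coordinate errors into polygon area by Monte Carlo perturbation of boundary vertices, can be sketched briefly. This simplified sketch perturbs vertices independently, whereas the paper's model adds spatial dependence along boundary segments; the square geometry and error magnitude are illustrative assumptions:

```python
import math
import random

def shoelace_area(pts):
    """Unsigned polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def area_error_mc(pts, sigma, trials=20000, seed=1):
    """Monte Carlo propagation of Gaussian coordinate error into polygon area.

    Returns the mean and standard deviation of the area error relative to
    the error-free polygon.
    """
    rng = random.Random(seed)
    base = shoelace_area(pts)
    errs = []
    for _ in range(trials):
        noisy = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)) for x, y in pts]
        errs.append(shoelace_area(noisy) - base)
    mean = sum(errs) / trials
    sd = math.sqrt(sum((e - mean) ** 2 for e in errs) / (trials - 1))
    return mean, sd

square = [(0, 0), (10, 0), (10, 10), (0, 10)]  # 10 x 10 unit square
mean, sd = area_error_mc(square, sigma=0.1)
print(shoelace_area(square))  # 100.0
```

For this square the analytic first-order result is a zero-mean area error with standard deviation sigma * sqrt(200) ≈ 1.41, which the simulation reproduces.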
Grabowski, Patrick; Wilson, John; Walker, Alyssa; Enz, Dan; Wang, Sijian
2017-01-01
Demonstrate implementation, safety and feasibility of multimodal, impairment-based physical therapy (PT) combining vestibular/oculomotor and cervical rehabilitation with sub-symptom threshold exercise for the treatment of patients with post-concussion syndrome (PCS). University hospital outpatient sports medicine facility. Twenty-five patients (12-20 years old) meeting World Health Organization criteria for PCS following sport-related concussion referred for supervised PT consisting of sub-symptom cardiovascular exercise, vestibular/oculomotor and cervical spine rehabilitation. Retrospective cohort. Post-Concussion Symptom Scale (PCSS) total score, maximum symptom-free heart rate (SFHR) during graded exercise testing (GXT), GXT duration, balance error scoring system (BESS) score, and number of adverse events. Patients demonstrated a statistically significant decreasing trend (p < 0.01) for total PCSS scores (pre-PT M = 18.2 (SD = 14.2), post-PT M = 9.1 (SD = 10.8), n = 25). Maximum SFHR achieved on GXT increased 23% (p < 0.01, n = 14), and BESS errors decreased 52% (p < 0.01, n = 13). Two patients reported mild symptom exacerbation with aerobic exercise at home, attenuated by adjustment of the home exercise program. Multimodal, impairment-based PT is safe and associated with diminishing PCS symptoms. This establishes feasibility for future clinical trials to determine viable treatment approaches to reduce symptoms and improve function while avoiding negative repercussions of physical inactivity and premature return to full activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model for data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model: the root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K, with a mean absolute percentage error (MAPE) of 1.03%; for the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K, with a MAPE of 1.3%.
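The BPN used here is a standard one-hidden-layer network trained by back-propagation of the output error. A minimal self-contained sketch on a toy regression problem; the paper's actual inputs (air temperature, humidity, wind speed) and architecture are not reproduced, and the toy target y = x² is an illustrative assumption:

```python
import math
import random

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    """One hidden sigmoid layer, linear output."""
    h = [sig(w[0] * x + w[1]) for w in w1]
    return h, sum(wi * hi for wi, hi in zip(w2[:-1], h)) + w2[-1]

def rmse(samples, w1, w2):
    return math.sqrt(sum((forward(x, w1, w2)[1] - y) ** 2
                         for x, y in samples) / len(samples))

# Toy regression: fit y = x^2 on [0, 1]
random.seed(0)
samples = [(i / 10.0, (i / 10.0) ** 2) for i in range(11)]
H, LR = 4, 0.1
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]

rmse_before = rmse(samples, w1, w2)
for _ in range(2000):
    for x, y in samples:
        h, out = forward(x, w1, w2)
        err = out - y                                   # dE/d(out) for E = err^2 / 2
        for i in range(H):
            grad_h = err * w2[i] * h[i] * (1.0 - h[i])  # back-propagated delta
            w2[i] -= LR * err * h[i]
            w1[i][0] -= LR * grad_h * x
            w1[i][1] -= LR * grad_h
        w2[H] -= LR * err
rmse_after = rmse(samples, w1, w2)
print(rmse_after < rmse_before)  # training reduces RMSE, as in the SST correction
```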
The Effects of Aerobic Exercise and Gaming on Cognitive Performance.
Douris, Peter C; Handrakis, John P; Apergis, Demitra; Mangus, Robert B; Patel, Rima; Limtao, Jessica; Platonova, Svetlana; Gregorio, Aladino; Luty, Elliot
2018-03-01
The purpose of our study was to investigate the effects of video gaming, aerobic exercise (biking), and the combination of the two on three domains of cognitive performance: selective attention, processing speed, and executive functioning. The study was a randomized clinical trial with 40 subjects (mean age 23.7 ± 1.8 years) randomized to one of four 30-minute conditions: video gaming, biking, simultaneous gaming and biking, and a control condition. Cognitive performance was measured before and after each condition using the Stroop test and the Trails B test. A mixed design was utilized. While the video gaming, biking, and simultaneous gaming-and-biking conditions all improved selective attention and processing speed (p < 0.05), only the biking condition improved the highest order of cognitive performance, executive function (p < 0.01). There were no changes in cognitive performance for the control condition. Previous studies have shown that when tasks approach the limits of attentional capacity, the overall chance for errors increases, a phenomenon known as the dual-task deficit. Simultaneous biking and gaming may have surpassed attentional capacity limits, ultimately increasing errors during the executive function tests of our cognitive performance battery. The results suggest that the fatiguing effects of a combined physically and mentally challenging task, which extend beyond exercise cessation, may overcome the beneficial cognitive effects otherwise derived from physical exercise.
Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G
2015-01-01
To determine the utility of equations that use 6-minute walk test (6MWT) results to estimate peak oxygen uptake (V̇O2) and peak work rate in chronic obstructive pulmonary disease (COPD) patients in a clinical setting. This study included a systematic review to identify published equations estimating peak V̇O2 and peak work rate in watts in COPD patients, and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak V̇O2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Peak V̇O2 and peak work rate were estimated from 6MWT distance using the published equations. The error associated with using estimated peak V̇O2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak V̇O2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak V̇O2 and peak work rate and estimations from equations that use 6MWT distance, and the use of the estimated values for prescription of aerobic exercise intensity would result in large error.
Equations estimating peak V̇O2 and peak work rate are of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.
Channel simulation to facilitate mobile-satellite communications research
NASA Technical Reports Server (NTRS)
Davarian, Faramaz
1987-01-01
The mobile-satellite-service channel simulator, a facility for end-to-end hardware simulation of mobile satellite communications links, is discussed. Propagation effects, Doppler, interference, band limiting, satellite nonlinearity, and thermal noise have been incorporated into the simulator. The propagation environment in which the simulator needs to operate and the architecture of the simulator are described. The simulator is composed of a mobile/fixed transmitter, interference transmitters, a propagation path simulator, a spacecraft, and a fixed/mobile receiver. Data from application experiments conducted with the channel simulator are presented; the noise conversion technique used to evaluate interference effects, the error-floor phenomenon of digital multipath fading links, and the fade margin associated with a noncoherent receiver are examined. Diagrams of the simulator are provided.
Wideband propagation measurements at 30.3 GHz through a pecan orchard in Texas
NASA Astrophysics Data System (ADS)
Papazian, Peter B.; Jones, David L.; Espeland, Richard H.
1992-09-01
Wideband propagation measurements were made in a pecan orchard in Texas during April and August of 1990 to examine the propagation characteristics of millimeter-wave signals through vegetation. Measurements were made on tree-obstructed paths with and without leaves. The study presents narrowband attenuation data at 9.6 and 28.8 GHz as well as wideband impulse-response measurements at 30.3 GHz. The wideband probe (Violette et al., 1983) provides the amplitude and delay of reflected and scattered signals as well as the bit error rate. This is accomplished using a 500 Mbit/s pseudo-random code to BPSK-modulate a 28.8 GHz carrier. The channel impulse response is then extracted by cross-correlating the received pseudo-random sequence with a locally generated replica.
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employs virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear whether, and to what extent, their untamed execution affects the results of simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on an actual prototyped implementation. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer-worm propagation phenomenon, and (c) a mobile ad hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically demonstrate the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and we also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
CEMERLL: The Propagation of an Atmosphere-Compensated Laser Beam to the Apollo 15 Lunar Array
NASA Technical Reports Server (NTRS)
Fugate, R. Q.; Leatherman, P. R.; Wilson, K. E.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes.
Research Effort in Atmospheric Propagation.
The effect of velocity and air mean free path on wire microthermal measurements was reported. The results were that the procedure of calibrating a microthermal … A larger molecular mean free path can increase the error by another 4%. A discussion of refractive index spectra obtained from airborne microthermal …
Fat and Sugar Metabolism During Exercise in Patients With Metabolic Myopathy
2017-08-31
Metabolism, Inborn Errors; Lipid Metabolism, Inborn Errors; Carbohydrate Metabolism, Inborn Errors; Long-Chain 3-Hydroxyacyl-CoA Dehydrogenase Deficiency; Glycogenin-1 Deficiency (Glycogen Storage Disease Type XV); Carnitine Palmitoyl Transferase 2 Deficiency; VLCAD Deficiency; Medium-chain Acyl-CoA Dehydrogenase Deficiency; Multiple Acyl-CoA Dehydrogenase Deficiency; Carnitine Transporter Deficiency; Neutral Lipid Storage Disease; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Muscle Phosphofructokinase Deficiency; Phosphoglucomutase 1 Deficiency; Phosphoglycerate Mutase Deficiency; Phosphoglycerate Kinase Deficiency; Phosphorylase Kinase Deficiency; Beta Enolase Deficiency; Lactate Dehydrogenase Deficiency; Glycogen Synthase Deficiency
Statistical error propagation in ab initio no-core full configuration calculations of light nuclei
Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; ...
2015-12-28
We propagate the statistical uncertainty of experimental NN scattering data into the binding energies of ³H and ⁴He. We also study the sensitivity of the magnetic moment and proton radius of ³H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For these light nuclei we obtain ΔE_stat(³H) = 0.015 MeV and ΔE_stat(⁴He) = 0.055 MeV.
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter for applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique that propagates the covariance square-root matrix in lower triangular form is given for the discrete-observation case. The technique is faster than previously proposed algorithms and is well adapted for use with the Carlson square-root measurement algorithm.
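Potter's square-root measurement update can be sketched for a scalar observation: with P = S Sᵀ, the factor S is updated directly so that S′S′ᵀ equals the conventional Kalman covariance update, without ever forming P. A minimal sketch under assumed 2-state numbers (the lower-triangular propagation and the Carlson variant mentioned above are not shown):

```python
import math

def potter_update(S, H, r):
    """Potter's square-root measurement update for a scalar observation.

    S is a square root of the covariance (P = S S^T), H the observation row,
    r the measurement noise variance. Returns the updated factor S' and the
    Kalman gain K, with S' S'^T = (I - K H) P.
    """
    n = len(S)
    f = [sum(S[k][i] * H[k] for k in range(n)) for i in range(n)]   # f = S^T H^T
    alpha = 1.0 / (sum(fi * fi for fi in f) + r)                    # 1 / (H P H^T + r)
    gamma = alpha / (1.0 + math.sqrt(alpha * r))
    Sf = [sum(S[i][j] * f[j] for j in range(n)) for i in range(n)]  # S f = P H^T
    S_new = [[S[i][j] - gamma * Sf[i] * f[j] for j in range(n)] for i in range(n)]
    K = [alpha * Sf[i] for i in range(n)]
    return S_new, K

# 2-state example: S is a Cholesky-like square root of P
S = [[2.0, 0.0], [0.5, 1.0]]
H = [1.0, 0.0]            # observe the first state
S1, K = potter_update(S, H, 0.25)

# Check S1 S1^T against the conventional update P' = P - K (H P)
P = [[sum(S[i][k] * S[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
HP = [sum(H[k] * P[k][j] for k in range(2)) for j in range(2)]
P_std = [[P[i][j] - K[i] * HP[j] for j in range(2)] for i in range(2)]
P_sqrt = [[sum(S1[i][k] * S1[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
print(all(abs(P_std[i][j] - P_sqrt[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True: the two updates agree
```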
Preconditioning the Helmholtz Equation for Rigid Ducts
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1998-01-01
An innovative hyperbolic preconditioning technique is developed for the numerical solution of the Helmholtz equation, which governs acoustic propagation in ducts. Two pseudo-time parameters are used to produce an explicit iterative finite difference scheme. This scheme eliminates the large matrix-storage requirements normally associated with numerical solutions of the Helmholtz equation. The solution procedure is very fast when compared to other transient and steady methods. An optimization and an error analysis of the preconditioning factors are presented. For validation, the method is applied to sound propagation in a 2D semi-infinite hard-wall duct.
A Comprehensive Revision of the Logistics Planning Exercise (Log-Plan-X).
1981-06-01
teaching objectives. The difference between conventional teaching methods and simulation rests in the fact that most conventional techniques focus on … Trial-and-error systems in real life can be very costly. Simulations can be an efficient and effective alternative to such trial-and-error methods by allowing …
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes, and extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, this is a difficult and time-consuming task that requires an analyst with a professional medical background; a method is needed that extracts medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and closely related to 12 error factors. The relational model between the error-related items and error factors was established using a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared with plain BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and could promptly identify the error factors from the error-related items. The combination of the error-related items, their levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
The naming game simulates the process by which a population of agents, organized in a communication network, names an object. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world, and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; and 3) without any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network-science perspective.
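The baseline dynamics being modified here are easy to simulate. A minimal sketch on a fully connected population, with a simplified stand-in for learning errors (with some probability the hearer mis-learns the uttered word as a brand-new word); the population size, error handling, and step cap are illustrative assumptions, not the NGLE model itself:

```python
import random

def naming_game(n_agents=20, error_rate=0.0, max_steps=100000, seed=42):
    """Minimal naming game on a fully connected population.

    Returns the number of interactions until consensus (all agents hold
    the same single word), or None if the step budget is exhausted.
    """
    rng = random.Random(seed)
    vocab = [[] for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        s, h = rng.sample(range(n_agents), 2)   # speaker, hearer
        if not vocab[s]:                        # empty vocabulary: invent a word
            vocab[s].append(next_word)
            next_word += 1
        word = rng.choice(vocab[s])
        if rng.random() < error_rate:           # learning error: word garbled in transit
            word = next_word
            next_word += 1
        if word in vocab[h]:                    # success: both collapse to the word
            vocab[s] = [word]
            vocab[h] = [word]
        else:                                   # failure: hearer learns the word
            vocab[h].append(word)
        if all(len(v) == 1 for v in vocab) and len({v[0] for v in vocab}) == 1:
            return step + 1
    return None

steps = naming_game(error_rate=0.0)
print(steps is not None)  # the error-free game reaches consensus
```

Raising error_rate in this sketch inflates the number of distinct words in circulation, the effect quantified in finding 2) above.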
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
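The described calculation can be sketched as a Gaussian tail probability. The exact decision model is not given in the abstract; the sketch below assumes the bit error corresponds to Gaussian noise of standard deviation sigma pushing a signal of mean margin mu past the decision threshold, i.e. BER = Q(mu/sigma). The function name is illustrative.

```python
import math

def ber_from_s_params(mean_margin, noise_std):
    """Estimate bit error rate as the Gaussian tail probability that noise
    pushes the measured S-parameter past the decision threshold:
    BER = Q(mu / sigma) = 0.5 * erfc(mu / (sigma * sqrt(2)))."""
    if noise_std <= 0.0:
        return 0.0  # noiseless case: no errors
    return 0.5 * math.erfc(mean_margin / (noise_std * math.sqrt(2.0)))
```

For example, a zero margin gives BER = 0.5 (pure guessing), and the BER falls rapidly as the mean-to-noise ratio grows.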
The Use of Neural Networks for Determining Tank Routes
1992-09-01
Monterey, CA 93943-5000. [Figure 1. Neural Network Architecture] The back-error propagation technique iteratively assigns weights to connections, computes the errors... neurons as the start. From that we decided to try 4, 6, 8, 10, 12, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90 and 100 or until it was obvious that
Huizinga, Richard J.
2011-01-01
The size of the scour holes observed at the surveyed sites likely was affected by the low to moderate flow conditions on the Missouri and Mississippi Rivers at the time of the surveys. The scour holes likely would be larger during conditions of increased flow. Artifacts of horizontal positioning errors were present in the data, but an analysis of the surveys indicated that most of the bathymetric data have a total propagated error of less than 0.33 foot.
Control of secondary electrons from ion beam impact using a positive potential electrode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, T. P., E-mail: tpcrowley@xanthotechnologies.com; Demers, D. R.; Fimognari, P. J.
2016-11-15
Secondary electrons emitted when an ion beam impacts a detector can amplify the ion beam signal, but also introduce errors if electrons from one detector propagate to another. A potassium ion beam and a detector comprised of ten impact wires, four split-plates, and a pair of biased electrodes were used to demonstrate that a low-voltage, positive electrode can be used to maintain the beneficial amplification effect while greatly reducing the error introduced from the electrons traveling between detector elements.
Developing a confidence metric for the Landsat land surface temperature product
NASA Astrophysics Data System (ADS)
Laraby, Kelly G.; Schott, John R.; Raqueno, Nina
2016-05-01
Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), a LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
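The distance-to-nearest-cloud statistic described above can be sketched as follows. This is a brute-force illustration on a small grid (a production implementation would use a distance transform), and all function names are hypothetical.

```python
import math

def distance_to_nearest_cloud(cloud_mask):
    """Brute-force per-pixel distance (in pixels) to the nearest cloudy pixel.
    cloud_mask: 2-D list of bools, True where a cloud was detected."""
    rows, cols = len(cloud_mask), len(cloud_mask[0])
    clouds = [(r, c) for r in range(rows) for c in range(cols) if cloud_mask[r][c]]
    dist = [[math.inf] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for (cr, cc) in clouds:
                d = math.hypot(r - cr, c - cc)
                if d < dist[r][c]:
                    dist[r][c] = d
    return dist

def bin_errors_by_distance(dist, lst_error, bin_width=2.0):
    """Average observed LST error within each band of cloud distance."""
    sums, counts = {}, {}
    for r in range(len(dist)):
        for c in range(len(dist[0])):
            if math.isinf(dist[r][c]):
                continue  # no cloud anywhere in the scene
            b = int(dist[r][c] // bin_width)
            sums[b] = sums.get(b, 0.0) + lst_error[r][c]
            counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}
```

Repeating the binning over many scenes yields the average LST error as a function of cloud proximity, which is the basis of the proposed per-pixel quality metric.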
Nurses' Responses and Reactions to an Emergent Pediatric Simulation Exercise.
Hoffman, Kenneth; von Sadovszky, Victoria
Pediatric nurses' responses and reactions in emergent simulations are understudied. Using authority gradient theory as a guide, the purpose of this study was to examine nurses' reactions during an emergency simulation exercise when directed to give an incorrect medication dose. Ten groups of noncritical care nurses were videotaped from the beginning of the simulation through debriefing. Although errors were made during the simulation event, all groups responded correctly during debriefing, indicating that authority gradient may play a role in clinical decision-making.
Deductive Verification of Cryptographic Software
NASA Technical Reports Server (NTRS)
Almeida, Jose Barcelar; Barbosa, Manuel; Pinto, Jorge Sousa; Vieira, Barbara
2009-01-01
We report on the application of an off-the-shelf verification platform to the RC4 stream cipher cryptographic software implementation (as available in the openSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.
Performance Characterization of an Instrument.
ERIC Educational Resources Information Center
Salin, Eric D.
1984-01-01
Describes an experiment designed to teach students to apply the same statistical awareness to instrumentation they commonly apply to classical techniques. Uses propagation of error techniques to pinpoint instrumental limitations and breakdowns and to demonstrate capabilities and limitations of volumetric and gravimetric methods. Provides lists of…
Managing insulin therapy during exercise in type 1 diabetes mellitus.
Toni, Sonia; Reali, Maria Francesca; Barni, Federica; Lenzi, Lorenzo; Festini, Filippo
2006-01-01
Exercise is integral to the life of subjects with T1DM, and several factors influence their metabolic response to exercise. Despite the physical and psychological benefits of exercise, its hypo- and hyperglycemic effects may discourage participation in sports and games. The aim was to use existing evidence from the literature to provide practical indications for the management of insulin therapy in subjects with T1DM who practice sports or physical activities. Bibliographic research was performed on PubMed, and the main systematic review and guideline databases were also searched. Existing guidelines are useful, but exact adjustments of the insulin dose must be made on an individual basis, and these adjustments can be made only by a "trial and error" approach. These clinical indications may be a starting point from which health care providers can find practical advice for each patient.
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.
2018-07-01
The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.; Dearborn, D. S.; Miller, P. L.
2015-12-01
The annual probability of an asteroid impact is low, but over time, such catastrophic events are inevitable. Interest in assessing the impact consequences has led us to develop a physics-based framework to seamlessly simulate the event from entry to impact, including air and water shock propagation and wave generation. The non-linear effects are simulated using the hydrodynamics code GEODYN. As effects propagate outward, they become a wave source for the linear-elastic-wave propagation code, WPP/WWP. The GEODYN-WPP/WWP coupling is based on the structured adaptive-mesh-refinement infrastructure, SAMRAI, and has been used in FEMA table-top exercises conducted in 2013 and 2014, and more recently, the 2015 Planetary Defense Conference exercise. Results from these simulations provide an estimate of onshore effects and can inform more sophisticated inundation models. The capabilities of this methodology are illustrated by providing results for different impact locations, and an exploration of the effect of asteroid size on the waves arriving at the shoreline of area cities. We constructed the maximum and minimum envelopes of water-wave heights given the size of the asteroid and the location of the impact along the risk corridor. Such profiles can inform emergency response and disaster-mitigation efforts, and may be used for design of maritime protection or assessment of risk to shoreline structures of interest. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-675390-DRAFT.
Characterization of electrophysiological propagation by multichannel sensors
Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.
2016-01-01
Objective The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907
Performance of cellular frequency-hopped spread-spectrum radio networks
NASA Astrophysics Data System (ADS)
Gluck, Jeffrey W.; Geraniotis, Evaggelos
1989-10-01
Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.
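The standard building block behind such packet-level approximations is worth stating: under the common assumption of independent bit errors, an n-bit packet is received correctly only if every bit is, giving P_packet = 1 - (1 - p_b)^n. The helper names below are illustrative, and the throughput expression deliberately neglects the retransmission dynamics and interference modeling analyzed in the paper.

```python
def packet_error_prob(bit_error_prob, n_bits):
    """P(packet error) = 1 - (1 - p_b)^n, assuming independent bit errors."""
    return 1.0 - (1.0 - bit_error_prob) ** n_bits

def throughput(bit_error_prob, n_bits, offered_load=1.0):
    """Fraction of offered load delivered as error-free packets (a simplified
    sketch; real throughput analysis must account for coding and retries)."""
    return offered_load * (1.0 - packet_error_prob(bit_error_prob, n_bits))
```

For a 1000-bit packet at p_b = 10^-3, roughly 63% of packets contain at least one error, which is why forward-error-control coding matters at these operating points.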
Identifying medication error chains from critical incident reports: a new analytic approach.
Huckels-Baumgart, Saskia; Manser, Tanja
2014-10-01
Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported between 2009 and 2012 were categorized using the NCC MERP Medication Error Index and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process; the majority (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.
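The chain-identification step can be sketched as a small aggregation over incidents, where each incident is recorded as the ordered list of process stages the error passed through. Stage names and helper names here are illustrative, not the study's exact taxonomy.

```python
def chain_length(incident_stages):
    """Number of consecutive process stages an error passed through."""
    return len(incident_stages)

def summarize(incidents):
    """Fraction of errors caught at a single stage vs. those forming a chain
    of two or more stages, plus the most frequent chain observed."""
    n = len(incidents)
    single = sum(1 for s in incidents if chain_length(s) == 1)
    chains = [tuple(s) for s in incidents if chain_length(s) >= 2]
    most_common = max(set(chains), key=chains.count) if chains else None
    return {"caught_early": single / n,
            "chained": len(chains) / n,
            "most_frequent_chain": most_common}
```

Applied to the study's data, this kind of summary is what surfaces findings such as "74.2% of errors formed chains, most often preparation through administration".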
U.S. Coast Guard SARSAT Final Evaluation Report. Volume II. Appendices.
DOT National Transportation Integrated Search
1987-03-01
Contents: Controlled Tests; Controlled Test Error Analysis, Processing of Westwind Data; Exercises and Homing Tests; Further Analysis of Controlled Tests; Sar Case Analysis Tables; Narratives of Real Distress Cases; RCC Response Scenarios; Workload A...
Orbit covariance propagation via quadratic-order state transition matrix in curvilinear coordinates
NASA Astrophysics Data System (ADS)
Hernando-Ayuso, Javier; Bombardelli, Claudio
2017-09-01
In this paper, an analytical second-order state transition matrix (STM) for relative motion in curvilinear coordinates is presented and applied to the problem of orbit uncertainty propagation in nearly circular orbits (eccentricity smaller than 0.1). The matrix is obtained by linearization around a second-order analytical approximation of the relative motion recently proposed by one of the authors and can be seen as a second-order extension of the curvilinear Clohessy-Wiltshire (C-W) solution. The accuracy of the uncertainty propagation is assessed by comparison with numerical results based on Monte Carlo propagation of a high-fidelity model including geopotential and third-body perturbations. Results show that the proposed STM can greatly improve the accuracy of the predicted relative state: the average error is found to be at least one order of magnitude smaller compared to the curvilinear C-W solution. In addition, the effect of environmental perturbations on the uncertainty propagation is shown to be negligible up to several revolutions in the geostationary region and for a few revolutions in low Earth orbit in the worst case.
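The basic mechanism of covariance propagation through a state transition matrix can be illustrated with the first-order relation P1 = Phi P0 Phi^T. The paper's contribution adds second-order STM terms, which this sketch omits; the helper names are illustrative.

```python
def mat_mul(a, b):
    """Plain nested-list matrix multiplication."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def propagate_covariance(stm, cov0):
    """First-order covariance propagation: P1 = Phi @ P0 @ Phi^T.
    (The second-order STM correction of the paper is not included.)"""
    return mat_mul(mat_mul(stm, cov0), transpose(stm))
```

In the paper's setting, Phi is the analytical curvilinear STM about the reference orbit; the comparison baseline is Monte Carlo propagation of the full dynamics, against which the second-order STM is shown to be an order of magnitude more accurate than the curvilinear C-W solution.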
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty, which in turn leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
NASA Astrophysics Data System (ADS)
Butt, Ali
Crack propagation in a solid rocket motor environment is difficult to measure directly. This experimental and analytical study evaluated the viability of real-time radiography for detecting bore regression and propellant crack propagation speed. The scope included the quantitative interpretation of crack tip velocity from simulated radiographic images of a burning, center-perforated grain and actual real-time radiographs taken on a rapid-prototyped model that dynamically produced the surface movements modeled in the simulation. The simplified motor simulation portrayed a bore crack that propagated radially at a speed that was 10 times the burning rate of the bore. Comparing the experimental image interpretation with the calibrated surface inputs, measurement accuracies were quantified. The average measurements of the bore radius were within 3% of the calibrated values with a maximum error of 7%. The crack tip speed could be characterized with image processing algorithms, but not with the dynamic calibration data. The laboratory data revealed that noise in the transmitted X-Ray intensity makes sensing the crack tip propagation using changes in the centerline transmitted intensity level impractical using the algorithms employed.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
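The general recipe the paper applies to the Noise Reduction Rating, first-order propagation of errors cross-checked by Monte Carlo simulation, can be sketched generically. The function and parameter names below are illustrative, not the paper's NRR formula.

```python
import math
import random

def propagated_sigma(f, means, sigmas, h=1e-6):
    """First-order propagation of errors:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with partial derivatives taken by central difference."""
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

def monte_carlo_sigma(f, means, sigmas, n=20000, seed=1):
    """Monte Carlo cross-check: sample inputs, compute sample std of f."""
    rng = random.Random(seed)
    vals = [f([rng.gauss(m, s) for m, s in zip(means, sigmas)])
            for _ in range(n)]
    mean = sum(vals) / n
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
```

For a linear combination such as f = x0 + x1 with input sigmas 3 and 4, both routes recover sigma_f = 5; for the nonlinear NRR expression the two estimates agree only approximately, which is exactly the comparison the paper reports.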
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
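The core idea, replacing the transported forward weight with a fixed random feedback weight, can be demonstrated in a deliberately tiny scalar network. This toy is not the paper's experiment; it merely shows that a fixed random feedback weight `b` (standing in for the output weight `w2` in the hidden-layer update) still drives the loss down.

```python
import random

def train_feedback_alignment(steps=2000, lr=0.05, seed=0):
    """Scalar two-layer linear net learning y = 2x. The hidden-weight update
    uses a fixed random feedback weight b instead of the forward weight w2."""
    rng = random.Random(seed)
    w1, w2 = 0.5, 0.5          # forward weights
    b = rng.uniform(0.5, 1.5)  # fixed random feedback weight (never trained)
    target = 2.0               # the mapping to learn: y = 2 * x
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        h = w1 * x              # hidden activation
        y = w2 * h              # network output
        e = y - target * x      # error signal
        w2 -= lr * e * h        # exact gradient step for the output weight
        w1 -= lr * e * b * x    # feedback-alignment step: b replaces w2
    return w1, w2
```

After training, the product w1 * w2 approaches the target gain of 2, even though the hidden layer never saw the true backward weight, a one-dimensional caricature of the alignment effect the paper demonstrates in deep networks.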
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
Sacco, Guillaume; Caillaud, Corinne; Ben Sadoun, Gregory; Robert, Philippe; David, Renaud; Brisswalter, Jeanick
2016-01-01
Epidemiological studies highlight the relevance of regular exercise interventions to enhance or maintain neurocognitive function in subjects with cognitive impairments. The aim of this study was to ascertain the effect of aerobic exercise associated with cognitive enrichment on cognitive performance in subjects with mild cognitive impairment (MCI). Eight participants with MCI (72 ± 2 years) were enrolled in a 9-month study that consisted of two 3-month experimental interventions separated by a training cessation period of 3 months. The interventions included either aerobic exercise alone or aerobic exercise combined with cognitive enrichment. The exercise program involved two 20-min cycling exercise bouts per week at an intensity corresponding to 60% of the heart rate reserve. Cognitive performance was assessed using a task of single reaction time (SRT) and an inhibition task (Go-no-Go) before, immediately after, and 1 month after each intervention. The exercise intervention improved the speed of responses during the Go-no-Go task without any increase in errors. This improvement was enhanced by cognitive enrichment (6 ± 1%; p > 0.05) when compared with exercise alone (4 ± 0.5%). Following exercise cessation, this positive effect disappeared. No effect was observed on SRT performance. Regular aerobic exercise improved cognitive performance in MCI subjects, and the addition of cognitive tasks during exercise potentiated this effect. However, the influence of aerobic exercise on cognitive performance did not persist after cessation of training. Studies involving a larger number of subjects are necessary to confirm these results.
NASA Astrophysics Data System (ADS)
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology was comprised of a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at a catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised removing non-behavioural ensemble members at each stage, based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was setup following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges; discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input in a distributed hydrological model, and resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than that observed in estimated flood inundation extents.
NASA Astrophysics Data System (ADS)
Bhuiyan, M. A. E.; Nikolopoulos, E. I.; Anagnostou, E. N.
2017-12-01
Quantifying the uncertainty of global precipitation datasets is beneficial when using these precipitation products in hydrological applications, because precipitation uncertainty propagation through hydrologic modeling can significantly affect the accuracy of the simulated hydrologic variables. In this research the Iberian Peninsula is used as the study area, with a study period spanning eleven years (2000-2010). This study evaluates the performance of multiple hydrologic models forced with combined global rainfall estimates derived using a Quantile Regression Forests (QRF) technique. The QRF technique blends three satellite precipitation products (CMORPH, PERSIANN, and 3B42 (V7)), an atmospheric reanalysis precipitation and air temperature dataset, satellite-derived near-surface daily soil moisture data, and a terrain elevation dataset. A high-resolution precipitation dataset driven by ground-based observations (SAFRAN), available at 5 km/1 h resolution, is used as reference. Through the QRF blending framework the stochastic error model produces error-adjusted ensemble precipitation realizations, which are used to force four global hydrological models (JULES (Joint UK Land Environment Simulator), WaterGAP3 (Water-Global Assessment and Prognosis), ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) and SURFEX (Surface Externalisée)) to simulate three hydrologic variables (surface runoff, subsurface runoff and evapotranspiration). The models are also forced with the reference precipitation to generate reference-based hydrologic simulations. This study presents a comparative analysis of multiple hydrologic model simulations for different hydrologic variables and the impact of the blending algorithm on the simulated hydrologic variables. Results show how precipitation uncertainty propagates through the different hydrologic model structures to manifest as a reduction of error in the hydrologic variables.
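A minimal sketch of how error-adjusted ensemble realizations can be drawn from predicted conditional quantiles; the piecewise-linear quantile function, the specific quantile levels, and the values below are illustrative assumptions, not the paper's QRF implementation:

```python
import random

def ensemble_from_quantiles(quantile_levels, quantile_values, n_members, seed=0):
    """Draw ensemble precipitation realizations by inverse-transform
    sampling from a predicted conditional quantile function,
    interpolating linearly between the supplied quantile levels."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        u = rng.uniform(quantile_levels[0], quantile_levels[-1])
        # locate the bracketing quantile levels and interpolate
        for i in range(len(quantile_levels) - 1):
            lo, hi = quantile_levels[i], quantile_levels[i + 1]
            if lo <= u <= hi:
                w = (u - lo) / (hi - lo)
                members.append(quantile_values[i] * (1 - w)
                               + quantile_values[i + 1] * w)
                break
    return members

# hypothetical predicted quantiles (mm/h) for one pixel and time step
members = ensemble_from_quantiles([0.05, 0.5, 0.95], [0.0, 2.0, 8.0], n_members=50)
print(min(members) >= 0.0 and max(members) <= 8.0)  # -> True
```

Each realization would then be used as one precipitation forcing for the hydrological models.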
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In addition, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
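The closed-form spike-time idea can be illustrated as follows. Assuming the membrane potential reduces to a quadratic in x = exp(-t/tau) (a simplification of the spike response model; the coefficients and threshold below are hypothetical), the output spike time is obtained by solving the quadratic rather than scanning the voltage at every time step:

```python
import math

def first_spike_time(a, b, c, theta, tau):
    """Solve a*x^2 + b*x + c = theta with x = exp(-t/tau) and return the
    earliest spike time t >= 0, or None if the threshold is never reached.
    This mirrors the idea of obtaining output spike times in closed form
    instead of checking the membrane potential at all time points."""
    disc = b * b - 4 * a * (c - theta)
    if disc < 0:
        return None
    times = []
    for root in ((-b + math.sqrt(disc)) / (2 * a),
                 (-b - math.sqrt(disc)) / (2 * a)):
        if 0 < root <= 1:   # x = exp(-t/tau) must lie in (0, 1] for t >= 0
            times.append(-tau * math.log(root))
    return min(times) if times else None

# hypothetical case: V = x^2 crosses threshold 0.25 at x = 0.5, i.e. t = ln 2
print(round(first_spike_time(1.0, 0.0, 0.0, theta=0.25, tau=1.0), 3))  # -> 0.693
```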
Resolution of the COBE Earth sensor anomaly
NASA Technical Reports Server (NTRS)
Sedler, J.
1993-01-01
Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than plus or minus 0.10 deg from scanner specifications (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. However, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decreases by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decreases by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.
The algorithm study for using the back propagation neural network in CT image segmentation
NASA Astrophysics Data System (ADS)
Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi
2017-01-01
Back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store the mapping between large numbers of inputs and outputs without requiring complex mathematical equations to describe the mapping relationship, it is the most widely used. BP iteratively computes the weight coefficients and thresholds of the network based on the training and back propagation of samples, minimizing the network's sum of squared errors. Since the boundary of computed tomography (CT) heart images is usually discontinuous, and the volume and boundary of heart images vary greatly, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Moreover, there are large differences between diastolic and systolic images, which conventional methods cannot accurately classify. In this paper, we introduce BP to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain samples, and the BP network was trained on these samples. To obtain a BP network appropriate for the segmentation of heart images, we normalized the heart images and extracted the grey-level information of the heart. The boundary of the images was then input into the network to compare the differences between the theoretical output and the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. Through extensive training, the BP network became stable, and the weight coefficients of the layers could be determined, capturing the relationship between the CT images and the heart boundary.
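As a concrete illustration of the training loop the abstract describes (forward pass, comparison of theoretical and actual output, and back-propagation of the error to update layer weights), here is a minimal one-hidden-layer network in plain Python; the AND truth table stands in for the image-boundary training samples, which are not available here:

```python
import math, random

def train_bp(epochs=2000, lr=0.5, seed=1):
    """Minimal one-hidden-layer back-propagation network (2-2-1):
    forward pass, backward error propagation, weight update.
    Trained on the AND truth table as a toy stand-in task."""
    rng = random.Random(seed)
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    # weights [input->hidden] and [hidden->output], plus biases
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [rng.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            # forward pass
            h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
            y = sig(w2[0] * h[0] + w2[1] * h[1] + b2)
            total += 0.5 * (y - t) ** 2
            # backward pass: output delta, then hidden deltas
            dy = (y - t) * y * (1 - y)
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh[j] * x[0]
                w1[j][1] -= lr * dh[j] * x[1]
                b1[j] -= lr * dh[j]
            b2 -= lr * dy
        losses.append(total)
    return losses[0], losses[-1]

first, last = train_bp()
print(last < first)  # -> True
```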
GNSS-Reflectometry aboard ISS with GEROS: Investigation of atmospheric propagation effects
NASA Astrophysics Data System (ADS)
Zus, F.; Heise, S.; Wickert, J.; Semmling, M.
2015-12-01
GEROS-ISS (GNSS rEflectometry Radio Occultation and Scatterometry) is an ESA mission aboard the International Space Station (ISS). The main mission goals are the determination of the sea surface height and surface winds. Secondary goals are monitoring of land surface parameters and atmosphere sounding using GNSS radio occultation measurements. The international scientific study GARCA (GNSS-Reflectometry Assessment of Requirements and Consolidation of Retrieval Algorithms), funded by ESA, is part of the preparations for GEROS-ISS. Major goals of GARCA are the development of an end-to-end simulator for the GEROS-ISS measurements (GEROS-SIM) and the evaluation of the error budget of the GNSS reflectometry measurements. In this presentation we introduce some of the GARCA activities to quantify the influence of the ionized and neutral atmosphere on the altimetric measurements, which is a major error source for GEROS-ISS. First, we analyse to what extent the standard linear combination of interferometric paths at different carrier frequencies can be used to correct for the ionospheric propagation effects. Second, we make use of the tangent-linear version of our ray-trace algorithm to propagate the uncertainty of the underlying refractivity profile into the uncertainty of the interferometric path. For comparison, the sensitivity of the interferometric path with respect to the sea surface height is computed. Though our calculations are based on a number of simplifying assumptions (the Earth is a sphere, the atmosphere is spherically layered and the ISS and GNSS satellite orbits are circular) some general conclusions can be drawn. In essence, for elevation angles above -5° at the ISS, the higher-order ionospheric errors and the uncertainty of the interferometric path due to the uncertainty of the underlying refractivity profile are small enough to distinguish a sea surface height of ± 0.5 m.
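The dual-frequency correction mentioned above can be sketched numerically: the standard linear combination removes the first-order 1/f² ionospheric term from the interferometric path. The frequencies below are the GPS L1/L2 carriers; the path length and TEC-like term are hypothetical values for illustration:

```python
def ionosphere_free(path_l1, path_l2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free combination of two interferometric
    path measurements taken at carrier frequencies f1 and f2 (Hz)."""
    denom = f1 ** 2 - f2 ** 2
    return (f1 ** 2 / denom) * path_l1 - (f2 ** 2 / denom) * path_l2

# the first-order ionospheric delay scales with 1/f^2, so it cancels:
true_path = 100.0          # hypothetical geometric path (m)
iono_term = 5.0e17         # hypothetical TEC-like term
p1 = true_path + 40.3 * iono_term / 1575.42e6 ** 2
p2 = true_path + 40.3 * iono_term / 1227.60e6 ** 2
print(abs(ionosphere_free(p1, p2) - true_path) < 1e-6)  # -> True
```

Only the higher-order ionospheric terms, which this combination does not remove, remain in the error budget discussed in the abstract.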
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2015-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
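The stage-to-stage propagation described above, where the output distribution of one stage becomes the input distribution of the next, can be sketched with a toy Monte Carlo chain; the two lambda "stages" and their noise levels are hypothetical stand-ins for the kinematics and dynamics stages:

```python
import random

def monte_carlo_bounds(stage1, stage2, n=10000, seed=0):
    """Chain two model stages, feeding each stage-1 output sample into
    stage 2, and report the 5-95% confidence bounds of the final output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        angle = stage1(rng)            # e.g. joint angle with marker-placement noise
        moment = stage2(angle, rng)    # e.g. joint moment with parameter noise
        outputs.append(moment)
    outputs.sort()
    return outputs[int(0.05 * n)], outputs[int(0.95 * n)]

# hypothetical stages: Gaussian noise injected at each level of the chain
lo, hi = monte_carlo_bounds(
    lambda rng: 30.0 + rng.gauss(0, 2.0),
    lambda angle, rng: 1.5 * angle + rng.gauss(0, 1.0))
print(lo < 45.0 < hi)  # -> True
```

The width of the (lo, hi) interval is the propagated 5-95% bound analogous to those reported for joint moments.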
Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen
2014-11-01
Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.
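The error-propagation step of the directed relation graph method can be sketched in a few lines: the overall importance of each species to a target is the maximum, over all graph paths, of the product of direct interaction coefficients along the path. The species names and coefficient values below are hypothetical:

```python
def drgep_coefficients(direct, target):
    """Directed relation graph with error propagation (DRGEP) sketch:
    overall importance of each species to the target is the maximum over
    graph paths of the product of direct interaction coefficients,
    computed here with a simple best-first traversal."""
    overall = {target: 1.0}
    frontier = [target]
    while frontier:
        node = frontier.pop()
        for neighbour, r in direct.get(node, {}).items():
            value = overall[node] * r
            if value > overall.get(neighbour, 0.0):
                overall[neighbour] = value
                frontier.append(neighbour)
    return overall

# hypothetical direct interaction coefficients between species
direct = {"fuel": {"A": 0.9, "B": 0.2}, "A": {"B": 0.8}}
coeffs = drgep_coefficients(direct, "fuel")
print(round(coeffs["B"], 2))  # -> 0.72 (path fuel->A->B beats direct fuel->B)
```

Species whose overall coefficient falls below a chosen error threshold would be removed from the skeletal mechanism.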
A new device to study isoload eccentric exercise.
Guilhem, Gaël; Cornu, Christophe; Nordez, Antoine; Guével, Arnaud
2010-12-01
This study was designed to develop a new device allowing mechanical analysis of eccentric exercise against a constant load, with a view to comparing isoload (IL) and isokinetic (IK) eccentric exercises. A plate-loaded resistance training device was integrated with an IK dynamometer to acquire mechanical parameters (i.e., external torque, angular velocity). To determine the muscular torque produced by the subject, load torque was experimentally measured (TLexp) at 11 different loads from a 30° to 90° angle (0° = lever arm in horizontal position). TLexp was modeled to take the friction effect and torque variations into account. Validity of the modeled load torque (TLmod) was tested by determining the root mean square (RMS) error, bias, and 2SD between the descending part of TLexp (from 30° to 90°) and TLmod. Validity of TLexp was tested by a linear regression and a Passing-Bablok regression. A pilot analysis of 10 subjects was performed to determine the contribution of the torque due to the moment of inertia to the amount of external work (W). Results showed the validity of TLmod (bias = 0%; RMS error = 0.51%) and TLexp (SEM = 4.1 N·m; intraclass correlation coefficient (ICC) = 1.00; slope = 0.99; y-intercept = -0.13). External work calculation showed satisfactory reproducibility (SEM = 38.3 J; ICC = 0.98), and the moment of inertia contribution to W was low (3.2 ± 2.0%). These results allow us to validate the new device developed in this study. Such a device could be used in future work to study IL eccentric exercise and to compare the effects of IL and IK eccentric exercises in standardized conditions.
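The bias and RMS-error validation used for TLmod can be reproduced in a few lines; the two sample torque values below are hypothetical, chosen only to show the computation:

```python
import math

def bias_and_rmse_percent(measured, modelled):
    """Bias and RMS error of a model against measurements, expressed as
    a percentage of the mean measured value (the form of validation used
    to compare TLmod against TLexp)."""
    n = len(measured)
    mean_meas = sum(measured) / n
    bias = sum(mo - me for me, mo in zip(measured, modelled)) / n
    rmse = math.sqrt(sum((mo - me) ** 2 for me, mo in zip(measured, modelled)) / n)
    return 100 * bias / mean_meas, 100 * rmse / mean_meas

# hypothetical torques (N.m): symmetric errors give zero bias, nonzero RMSE
bias, rmse = bias_and_rmse_percent([50.0, 50.0], [49.0, 51.0])
print(bias, round(rmse, 1))  # -> 0.0 2.0
```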
Choi, Jin-Seung; Kang, Dong-Won; Seo, Jeong-Woo; Kim, Dae-Hyeok; Yang, Seung-Tae; Tack, Gye-Rae
2016-01-01
[Purpose] In this study, a program was developed for leg-strengthening exercises and balance assessment using Microsoft Kinect. [Subjects and Methods] The program consists of three leg-strengthening exercises (knee flexion, hip flexion, and hip extension) and the one-leg standing test (OLST). The program recognizes the correct exercise posture by comparison with the range of motion of the hip and knee joints and provides a number of correct action examples to improve training. The program measures the duration of the OLST and presents this as the balance-age. The accuracy of the program was analyzed using the data of five male adults. [Results] In terms of the motion recognition accuracy, the sensitivity and specificity were 95.3% and 100%, respectively. For the balance assessment, the time measured using the existing method with a stopwatch had an absolute error of 0.37 sec. [Conclusion] The developed program can be used to enable users to conduct leg-strengthening exercises and balance assessments at home.
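The reported accuracy measures follow the standard confusion-matrix definitions; the counts below are hypothetical values chosen only to reproduce figures of the same magnitude as the reported 95.3% and 100%:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Motion-recognition accuracy measures:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts of correctly/incorrectly recognized exercise postures
sens, spec = sensitivity_specificity(tp=41, fn=2, tn=30, fp=0)
print(round(100 * sens, 1), round(100 * spec, 1))  # -> 95.3 100.0
```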
Simulation of General Physics laboratory exercise
NASA Astrophysics Data System (ADS)
Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.
2015-01-01
Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Due to the need to acquire enough laboratory equipment for all the students, and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general Physics is the calculation of the gravity acceleration value, through the free-fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow statistical treatment of the resulting data, measurement errors are simulated through limited randomization.
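The simulated exercise can be sketched as follows: fall times are drawn with Gaussian timer noise (the 10 ms standard deviation and drop height are assumptions, not the paper's values) and g is recovered from the mean time via g = 2h/t²:

```python
import random

def simulated_fall(height_m, g=9.81, timer_sd=0.01, rng=random.Random(42)):
    """One simulated free-fall timing measurement: the true fall time
    t = sqrt(2h/g) plus Gaussian timer error (shared deterministic
    generator for reproducibility)."""
    return (2 * height_m / g) ** 0.5 + rng.gauss(0, timer_sd)

def estimate_g(height_m, n_trials=200):
    """Recover g from repeated simulated measurements via g = 2h / t_mean^2,
    as a student would from their table of fall times."""
    times = [simulated_fall(height_m) for _ in range(n_trials)]
    t_mean = sum(times) / len(times)
    return 2 * height_m / t_mean ** 2

g_hat = estimate_g(1.5)
print(abs(g_hat - 9.81) < 0.5)  # -> True
```

Repeating the estimate at different heights mimics the statistical treatment of data the exercise is designed to teach.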
VizieR Online Data Catalog: V and R CCD photometry of visual binaries (Abad+, 2004)
NASA Astrophysics Data System (ADS)
Abad, C.; Docobo, J. A.; Lanchares, V.; Lahulla, J. F.; Abelleira, P.; Blanco, J.; Alvarez, C.
2003-11-01
Table 1 gives relevant data for the visual binaries observed. Observations were carried out over a short period of time, therefore we assign the mean epoch (1998.58) to the totality of the data. Data for individual stars are presented as average values with errors, by parameter, when several observations have been obtained, as well as the number of observations involved. Errors corresponding to astrometric relative positions between components are always present. For single observations, parameter fitting errors, especially for the dx and dy parameters, have been calculated by analysing the chi-square statistic around the minimum. Following the rules of error propagation, theta and rho errors can be estimated. Table 1 therefore shows single-observation errors with an additional significant digit. When a star does not have known references, we include it in Table 2, where the J2000 position and magnitudes are from the USNO-A2.0 catalogue (Monet et al., 1998, Cat. ). (2 data files).
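The propagation from (dx, dy) fitting errors to the (theta, rho) errors mentioned above follows the standard first-order rules; the angle convention theta = atan2(dx, dy) and the sample values below are assumptions for illustration:

```python
import math

def theta_rho_errors(dx, dy, s_dx, s_dy):
    """Propagate fitting errors of the relative position (dx, dy) into
    the separation rho = sqrt(dx^2 + dy^2) and the position angle
    theta = atan2(dx, dy) by first-order error propagation."""
    rho = math.hypot(dx, dy)
    theta = math.atan2(dx, dy)
    # partial derivatives of rho and theta with respect to dx and dy
    s_rho = math.sqrt((dx / rho * s_dx) ** 2 + (dy / rho * s_dy) ** 2)
    s_theta = math.sqrt((dy / rho ** 2 * s_dx) ** 2 + (dx / rho ** 2 * s_dy) ** 2)
    return theta, rho, s_theta, s_rho

# hypothetical relative position (arcsec) and fitting errors
theta, rho, s_theta, s_rho = theta_rho_errors(3.0, 4.0, 0.1, 0.1)
print(round(rho, 2), round(s_rho, 2))  # -> 5.0 0.1
```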
ERIC Educational Resources Information Center
Davis, Richard A.
2015-01-01
A simple classroom exercise is used to teach students about the law of propagation of uncertainty in experimental measurements and analysis. Students calculate the density of a rectangular wooden block with a hole from several measurements of mass and length using a ruler and scale. The ruler and scale give students experience with estimating…
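The classroom exercise can be worked through numerically. Below is a sketch under the assumption that the hole is a cylinder drilled through the block's height; all sample measurements and uncertainties are hypothetical:

```python
import math

def density_with_uncertainty(m, sm, L, sL, W, sW, H, sH, d, sd):
    """Density of a rectangular block (L x W x H) with a cylindrical hole
    of diameter d through its height, with first-order propagation of the
    measurement uncertainties into the density."""
    volume = L * W * H - math.pi * (d / 2) ** 2 * H
    rho = m / volume
    # partial derivatives of the volume with respect to each length
    dV_dL, dV_dW = W * H, L * H
    dV_dH = L * W - math.pi * (d / 2) ** 2
    dV_dd = -math.pi * d * H / 2
    s_volume = math.sqrt((dV_dL * sL) ** 2 + (dV_dW * sW) ** 2 +
                         (dV_dH * sH) ** 2 + (dV_dd * sd) ** 2)
    # combine relative uncertainties of mass and volume
    s_rho = rho * math.sqrt((sm / m) ** 2 + (s_volume / volume) ** 2)
    return rho, s_rho

# hypothetical ruler/scale measurements (g and cm)
rho, s_rho = density_with_uncertainty(m=400.0, sm=0.1,
                                      L=10.0, sL=0.05, W=5.0, sW=0.05,
                                      H=2.0, sH=0.05, d=2.0, sd=0.05)
print(rho > 0 and s_rho > 0)  # -> True
```

Comparing the size of each term inside `s_volume` shows students which measurement dominates the final error, the point of the exercise.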
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Chakravarty, Abhijit
2011-06-01
Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.
Regional Variation in Use of Complementary Health Approaches by U.S. Adults
... part of their yoga exercise. Data sources and methods Data from the 2012 NHIS were used for ... sampling design of NHIS. The Taylor series linearization method was chosen for estimation of standard errors. Differences ...
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also risk in providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and hypothesis-testing based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer.
To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks in providing performance guarantees that arise from uncertainties in performance simulation.
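The data-reconciliation core of SDRMC can be illustrated with the smallest possible example: a weighted least-squares adjustment of three flow measurements to a single balance constraint. The flow values, variances, and constraint are hypothetical, and the closed-form single-multiplier solution stands in for the iterative Levenberg-Marquardt scheme the work actually uses:

```python
def reconcile_mass_balance(meas, var):
    """Weighted least-squares reconciliation of three measurements
    subject to the balance constraint x1 = x2 + x3, solved in closed
    form with a single Lagrange multiplier."""
    r = meas[0] - meas[1] - meas[2]        # constraint residual
    lam = r / (var[0] + var[1] + var[2])
    # adjustments are proportional to each measurement's variance,
    # so the least trusted measurements absorb most of the correction
    return (meas[0] - var[0] * lam,
            meas[1] + var[1] * lam,
            meas[2] + var[2] * lam)

# hypothetical flows and equal measurement variances
x1, x2, x3 = reconcile_mass_balance([10.5, 6.0, 4.0], [0.25, 0.25, 0.25])
print(abs(x1 - x2 - x3) < 1e-9)  # -> True
```

After reconciliation the adjusted values satisfy the balance exactly, which is what allows model calibration to proceed on consistent data.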
Mekid, Samir; Vacharanukul, Ketsaya
2006-01-01
To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and is presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensation. Based on a well-resolved laser Doppler technique and real-time data acquisition, the probe delivers a very promising dimensional accuracy of a few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency, allowing higher precision.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
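The checkpoint-and-rollback contract underlying the scheme is easy to sketch in isolation. This toy class deliberately ignores caches, checkpoint identifiers, and coherence protocols; it only shows the restart-from-checkpoint behaviour that prevents an error from propagating forward:

```python
import copy

class CheckpointedProcess:
    """Sketch of checkpoint-and-rollback recovery: computation state is
    checkpointed periodically, and a detected transient error restarts
    the process from the last checkpoint instead of propagating."""

    def __init__(self, state):
        self.state = state
        self.checkpoint_state = copy.deepcopy(state)

    def checkpoint(self):
        """Record the current state as the recovery point."""
        self.checkpoint_state = copy.deepcopy(self.state)

    def rollback(self):
        """Discard state modified since the last checkpoint."""
        self.state = copy.deepcopy(self.checkpoint_state)

p = CheckpointedProcess({"counter": 0})
p.state["counter"] = 5
p.checkpoint()
p.state["counter"] = 99     # a transient fault corrupts the state
p.rollback()                # error detected: restart from the checkpoint
print(p.state["counter"])   # -> 5
```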
NASA Astrophysics Data System (ADS)
Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei
2017-05-01
The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications is important to quantify the relative importance of the uncertainty associated with radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirements of operational applications to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and 1 h accumulation over an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping bucket rain gauges from the Environment Agency; 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation.
The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.
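A basic form of the spread check described above, counting how often the observed series falls within the ensemble envelope, can be written as follows; the three-member ensemble and observation series are hypothetical:

```python
def spread_coverage(ensemble, observations):
    """Fraction of time steps at which the observed value falls inside
    the ensemble envelope (min..max across members) - a simple check of
    whether the ensemble spread captures the observed series."""
    hits = 0
    for t, obs in enumerate(observations):
        members = [member[t] for member in ensemble]
        if min(members) <= obs <= max(members):
            hits += 1
    return hits / len(observations)

# hypothetical three-member ensemble over three time steps
ensemble = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [0.5, 1.5, 2.5]]
print(round(spread_coverage(ensemble, [1.2, 5.0, 3.0]), 3))  # -> 0.667
```

A coverage well below the nominal level is the signature, noted in the abstract, that other uncertainty sources besides rainfall contribute to the model-observation mismatch.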
Opar, David A; Piatkowski, Timothy; Williams, Morgan D; Shield, Anthony J
2013-09-01
Reliability and case-control injury study. To determine if a novel device designed to measure eccentric knee flexor strength via the Nordic hamstring exercise displays acceptable test-retest reliability; to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and to determine if the device can detect weakness in elite athletes with a previous history of unilateral HSI. HSI and reinjury are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSI. However, at present, there is a lack of easily accessible equipment to assess eccentric knee flexor strength. Thirty recreationally active males without a history of HSI completed the Nordic hamstring exercise on the device on 2 separate occasions. Intraclass correlation coefficients, typical error, typical error as a coefficient of variation, and minimal detectable change at a 95% confidence level were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a unilateral history of HSI within the previous 12 months performed the Nordic hamstring exercise on the device to determine if residual eccentric muscle weakness existed in the previously injured limb. The device displayed high to moderate reliability (intraclass correlation coefficient = 0.83-0.90; typical error, 21.7-27.5 N; typical error as a coefficient of variation, 5.8%-8.5%; minimal detectable change at a 95% confidence level, 60.1-76.2 N). Mean ± SD normative eccentric flexor strength in the uninjured group was 344.7 ± 61.1 N for the left and 361.2 ± 65.1 N for the right side. 
The previously injured limb was 15% weaker than the contralateral uninjured limb (mean difference, 50.3 N; 95% confidence interval: 25.7, 74.9; P<.01), 15% weaker than the normative left limb (mean difference, 50.0 N; 95% confidence interval: 1.4, 98.5; P = .04), and 18% weaker than the normative right limb (mean difference, 66.5 N; 95% confidence interval: 18.0, 115.1; P<.01). The experimental device offers a reliable method to measure eccentric knee flexor strength and strength asymmetry and to detect residual weakness in previously injured elite athletes.
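The reliability statistics quoted (typical error, typical error as a coefficient of variation, and minimal detectable change at the 95% confidence level) follow standard test-retest formulas. A minimal sketch, using hypothetical strength values rather than the study's data:

```python
import numpy as np

def reliability_stats(trial1, trial2):
    """Typical error, CV%, and minimal detectable change at 95% confidence
    from test-retest data (standard formulas; the data here are hypothetical)."""
    t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
    diff = t2 - t1
    te = diff.std(ddof=1) / np.sqrt(2)               # typical error
    cv = 100.0 * te / np.concatenate([t1, t2]).mean()  # TE as coefficient of variation
    mdc95 = 1.96 * np.sqrt(2) * te                   # minimal detectable change (95%)
    return te, cv, mdc95

t1 = [320, 355, 290, 410, 365, 330]   # hypothetical eccentric strength, session 1 (N)
t2 = [335, 348, 305, 398, 380, 325]   # hypothetical eccentric strength, session 2 (N)
te, cv, mdc95 = reliability_stats(t1, t2)
print(round(te, 1), round(cv, 1), round(mdc95, 1))
```

A change smaller than the MDC95 cannot be distinguished from measurement noise, which is why the device's 60-76 N values matter when interpreting individual athletes.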
de Souza E Silva, Christina G; Kaminsky, Leonard A; Arena, Ross; Christle, Jeffrey W; Araújo, Claudio Gil S; Lima, Ricardo M; Ashley, Euan A; Myers, Jonathan
2018-05-01
Background Maximal oxygen uptake (VO2max) is a powerful predictor of health outcomes. Valid and portable reference values are integral to interpreting measured VO2max; however, available reference standards lack validation and are specific to exercise mode. This study was undertaken to develop and validate a single equation for normal standards for VO2max for the treadmill or cycle ergometer in men and women. Methods Healthy individuals (N = 10,881; 67.8% men, 20-85 years) who performed a maximal cardiopulmonary exercise test on either a treadmill or a cycle ergometer were studied. Of these, 7617 and 3264 individuals were randomly selected for development and validation of the equation, respectively. A Brazilian sample (1619 individuals) constituted a second validation cohort. The prediction equation was determined using multiple regression analysis, and comparisons were made with the widely used Wasserman and European equations. Results Age, sex, weight, height and exercise mode were significant predictors of VO2max. The regression equation was: VO2max (ml·kg⁻¹·min⁻¹) = 45.2 - 0.35*Age - 10.9*Sex (male = 1; female = 2) - 0.15*Weight (pounds) + 0.68*Height (inches) - 0.46*Exercise Mode (treadmill = 1; bike = 2) (R = 0.79, R² = 0.62, standard error of the estimate = 6.6 ml·kg⁻¹·min⁻¹). Percentage predicted VO2max for the US and Brazilian validation cohorts was 102.8% and 95.8%, respectively. The new equation performed better than traditional equations, particularly among women and individuals ≥60 years old. Conclusion A combined equation was developed for normal standards for VO2max for different exercise modes derived from a US national registry. The equation provided a lower average error between measured and predicted VO2max than traditional equations even when applied to an independent cohort. Additional studies are needed to determine its portability.
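The reported regression equation can be applied directly; a small sketch (the example subject is hypothetical):

```python
def predicted_vo2max(age, sex, weight_lb, height_in, mode):
    """Predicted VO2max (ml/kg/min) from the regression equation in the
    abstract. sex: 1 = male, 2 = female; mode: 1 = treadmill, 2 = cycle."""
    return (45.2 - 0.35 * age - 10.9 * sex
            - 0.15 * weight_lb + 0.68 * height_in - 0.46 * mode)

# Hypothetical subject: 40-year-old man, 176 lb, 70 in, treadmill test
print(round(predicted_vo2max(40, 1, 176, 70, 1), 1))  # 41.0
```

Percentage of predicted VO2max is then simply the measured value divided by this prediction, times 100.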
Suárez Rodríguez, David; del Valle Soto, Miguel
2017-01-01
Background The aim of this study is to find the differences between two specific interval exercises. We begin with the hypothesis that the use of microintervals of work and rest allow for greater intensity of play and a reduction in fatigue. Methods Thirteen competition-level male tennis players took part in two interval training exercises comprising nine 2 min series, which consisted of hitting the ball with cross-court forehand and backhand shots, behind the service box. One was a high-intensity interval training (HIIT), made up of periods of continuous work lasting 2 min, and the other was intermittent interval training (IIT), this time with intermittent 2 min intervals, alternating periods of work with rest periods. Average heart rate (HR) and lactate levels were registered in order to observe the physiological intensity of the two exercises, along with the Borg Scale results for perceived exertion and the number of shots and errors in order to determine the intensity achieved and the degree of fatigue throughout the exercise. Results There were no significant differences in the average heart rate, lactate or the Borg Scale. Significant differences were registered, on the other hand, with a greater number of shots in the first two HIIT series (series 1 p>0.009; series 2 p>0.056), but not in the third. The number of errors was significantly lower in all the IIT series (series 1 p<0.035; series 2 p<0.010; series 3 p<0.001). Conclusion Our study suggests that high-intensity intermittent training allows for greater intensity of play in relation to the real time spent on the exercise, reduced fatigue levels and the maintaining of greater precision in specific tennis-related exercises. PMID:29021912
A virtual model of the bench press exercise.
Rahmani, Abderrahmane; Rambaud, Olivier; Bourdin, Muriel; Mariot, Jean-Pierre
2009-08-07
The objective of this study was to design and validate a three degrees of freedom model in the sagittal plane for the bench press exercise. The mechanical model was based on rigid segments connected by revolute and prismatic pairs, which enabled a kinematic approach and global force estimation. The method requires only three simple measurements: (i) horizontal position of the hand (x0); (ii) vertical displacement of the barbell (Z); and (iii) elbow angle (θ). Eight adult male throwers performed maximal concentric bench press exercises against different masses. The kinematic results showed that the vertical displacement of each segment and the global centre of mass followed the vertical displacement of the lifted mass. Consequently, the vertical velocity and acceleration of the combined centre of mass and the lifted mass were identical. Finally, for each lifted mass, there were no practical differences between forces calculated from the bench press model and those simultaneously measured with a force platform. The error was lower than 2.5%. The validity of the mechanical method was also highlighted by a standard error of the estimate (SEE) ranging from 2.0 to 6.6 N in absolute terms, a coefficient of variation (CV) ≤ 0.8%, and a correlation between the two scores ≥ 0.99 for all the lifts (p < 0.001). The method described here, which is based on three simple parameters, allows accurate evaluation of the force developed by the upper limb muscles during bench press exercises in both field and laboratory conditions.
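The validation compares model forces against force-platform measurements; at its core, the vertical force on the combined moving mass follows F = m(a + g), with the acceleration obtained from the measured displacement. A simplified sketch (the full 3-DOF segment model is not reproduced here; the lift profile and moving mass are hypothetical):

```python
import numpy as np

def vertical_force(z, t, moving_mass):
    """Simplified vertical force during a lift: F = m (a + g), with the
    acceleration from twice-differentiated displacement. (The paper's full
    3-DOF model also tracks body-segment kinematics.)"""
    g = 9.81
    v = np.gradient(z, t)   # vertical velocity from displacement
    a = np.gradient(v, t)   # vertical acceleration from velocity
    return moving_mass * (a + g)

t = np.linspace(0.0, 0.8, 200)                 # 0.8 s concentric phase
z = 0.4 * (1 - np.cos(np.pi * t / 0.8)) / 2    # hypothetical 0.4 m lift profile
f = vertical_force(z, t, moving_mass=60.0)     # 60 kg barbell plus moving limbs
print(round(float(f.mean()), 1))               # close to m*g, since net velocity change cancels
```

In practice the displacement signal would come from the barbell position sensor, with the upper-limb segment masses added to the lifted mass per the model.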
Aliased tidal errors in TOPEX/POSEIDON sea surface height data
NASA Technical Reports Server (NTRS)
Schlax, Michael G.; Chelton, Dudley B.
1994-01-01
Alias periods and wavelengths for the M₂, S₂, N₂, K₁, O₁, and P₁ tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K₁ constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M₂ alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M₂, N₂ and O₁ fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S₂, K₁ and P₁ varies smoothly with latitude. S₂ is strongly aliased for latitudes within 50 degrees of the equator, while K₁ and P₁ are only weakly aliased in that range. A weighted least squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual period, westward propagating waves in the North Atlantic is presented.
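The alias period computation folds the tidal frequency about the Nyquist frequency implied by the satellite's exact-repeat sampling. A sketch, using the widely quoted TOPEX/POSEIDON repeat period of about 9.9156 days (a well-known value, though not stated in this abstract):

```python
def alias_period_days(tide_period_hours, repeat_days=9.9156):
    """Alias period of a tidal constituent sampled once per exact-repeat
    cycle (TOPEX/POSEIDON repeat period of roughly 9.9156 days assumed)."""
    samples_per_cycle = repeat_days * 24.0 / tide_period_hours
    frac = samples_per_cycle % 1.0
    frac = min(frac, 1.0 - frac)   # aliased frequency, in cycles per sample
    return repeat_days / frac

print(round(alias_period_days(12.4206012), 1))  # M2 aliases to roughly 62 days
print(round(alias_period_days(12.0), 1))        # S2 aliases to roughly 59 days
```

Both aliases fall well under 90 days, consistent with the abstract's statement that these constituents risk confusion with real intraseasonal ocean signals.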
Ionospheric Impacts on UHF Space Surveillance
NASA Astrophysics Data System (ADS)
Jones, J. C.
2017-12-01
Earth's atmosphere contains regions of ionized plasma caused by the interaction of highly energetic solar radiation. This region of ionization is called the ionosphere and varies significantly with altitude, latitude, local solar time, season, and solar cycle. Significant ionization begins at about 100 km (E layer) with a peak in the ionization at about 300 km (F2 layer). Above the F2 layer, the atmosphere is mostly ionized but the ion and electron densities are low due to the unavailability of neutral molecules for ionization so the density decreases exponentially with height to well over 1000 km. The gradients of these variations in the ionosphere play a significant role in radio wave propagation. These gradients induce variations in the index of refraction and cause some radio waves to refract. The amount of refraction depends on the magnitude and direction of the electron density gradient and the frequency of the radio wave. The refraction is significant at HF frequencies (3-30 MHz) with decreasing effects toward the UHF (300-3000 MHz) range. UHF is commonly used for tracking of space objects in low Earth orbit (LEO). While ionospheric refraction is small for UHF frequencies, it can cause errors in range, azimuth angle, and elevation angle estimation by ground-based radars tracking space objects. These errors can cause significant errors in precise orbit determinations. For radio waves transiting the ionosphere, it is important to understand and account for these effects. Using a sophisticated radio wave propagation tool suite and an empirical ionospheric model, we calculate the errors induced by the ionosphere in a simulation of a notional space surveillance radar tracking objects in LEO. These errors are analyzed to determine daily, monthly, annual, and solar cycle trends. Corrections to surveillance radar measurements can be adapted from our simulation capability.
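For a sense of scale, the standard first-order ionospheric group delay, ΔR = 40.3·TEC/f² (metres, with TEC in electrons/m² and f in Hz), shows why UHF range errors can reach tens of metres. The TEC and radar frequency below are hypothetical examples, not values from this work:

```python
def ionospheric_range_error_m(tec_el_m2, freq_hz):
    """First-order ionospheric group delay on a one-way path:
    dR = 40.3 * TEC / f^2 (metres), the standard thin-screen approximation."""
    return 40.3 * tec_el_m2 / freq_hz ** 2

# Hypothetical example: 30 TECU (3e17 electrons/m^2) at a 435 MHz UHF frequency
print(round(ionospheric_range_error_m(30e16, 435e6), 1))  # tens of metres of range error
```

The inverse-square frequency dependence is also why the same TEC produces negligible refraction at higher microwave bands but dominates at HF.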
NASA Astrophysics Data System (ADS)
Vandewal, Anthony
1993-11-01
This paper provides an unclassified overview of the U.S. Army program that collects and disseminates information about the effects of battlefield smokes and obscurants on weapon system performance. The primary mechanism for collecting field data is an annual exercise called SMOKE WEEK. In SMOKE WEEK testing, a complete characterization is made of the ambient test conditions, of the electromagnetic radiation propagation in clear and obscured conditions, and of the obscuring cloud and the particles that comprise it. This paper describes the instrumentation and methodology employed to make these field measurements, methods of analysis, and some typical results. The effects of these realistic battlefield environments on weapon system performance are discussed generically.
NASA Astrophysics Data System (ADS)
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study improvements to the learning method of technical drawing and descriptive geometry through exercises with traditional techniques that are usually solved manually, by applying automated processes assisted by high-level CAD templates (HLCts). Given that an exercise with traditional procedures can be solved step by step, as detailed in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and later generalize it by incorporating references. Traditional teaching methods have become obsolete, and current curricula have relegated them. However, they can be applied in certain automation processes. The use of geometric references (using variables in script languages) and their incorporation into HLCts allows the automation of drawing processes. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users can employ HLCts to generate future modifications of these exercises. This paper introduces the automation process for generating exercises based on CAD script files, aided by parametric geometry calculation tools. The proposed method allows us to design new exercises without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning. Automation in the generation of exercises not only saves time but also increases the quality of the statements and reduces the possibility of human error.
Implementations of back propagation algorithm in ecosystems applications
NASA Astrophysics Data System (ADS)
Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed
2015-05-01
Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies, that is, problems that do not have algorithmic solutions or whose algorithmic solutions are too complex to be found. In general, as abstractions of the biological brain, ANNs developed from concepts that evolved from late-twentieth-century neuro-physiological experiments on the cells of the human brain, to overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystems applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to an ecological system analysis. The neural networks use the well-known Back Propagation (BP) algorithm with the Delta Rule for adaptation of the system. The BP training algorithm is an effective analytical method for adaptation in ecosystems applications, chiefly because of this same capacity to capture non-linear patterns in data. The BP algorithm uses supervised learning, which means that the algorithm is provided with examples of the inputs and outputs the network should compute, and then the error is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. The training begins with random weights, and the goal is to adjust them so that the error will be minimal. This research evaluated the use of artificial neural network (ANN) techniques in ecological system analysis and modeling.
The experimental results from this research demonstrate that an artificial neural network system can be trained to act as an expert ecosystem analyzer for many applications in ecological fields. The pilot ecosystem analyzer shows promising ability for generalization and requires further tuning and refinement of the basis neural network system for optimal performance.
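The training loop described (forward pass, error calculation, delta-rule weight updates from random initial weights) can be sketched minimally as follows; the toy XOR data, layer sizes, and learning rate are illustrative stand-ins, not the paper's ecological dataset:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, n_hidden=3, lr=0.5, epochs=4000, seed=0):
    """Minimal back-propagation with the delta rule: forward pass,
    error calculation, and gradient-based weight updates."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))  # random initial weights
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1)                    # forward pass, hidden layer
        out = sigmoid(h @ W2)                  # forward pass, output layer
        err = y - out                          # output error
        losses.append(float((err ** 2).mean()))
        d_out = err * out * (1.0 - out)        # delta rule at the output
        d_h = (d_out @ W2.T) * h * (1.0 - h)   # error propagated backwards
        W2 += lr * h.T @ d_out                 # adjust weights to reduce error
        W1 += lr * X.T @ d_h
    return losses

# Toy XOR problem; the last input column is a constant bias term.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
losses = train_bp(X, y)
print(f"initial loss {losses[0]:.3f} -> final loss {losses[-1]:.3f}")
```

In an ecological application the inputs would be environmental predictors and the targets the observed ecosystem responses, with the same update rule throughout.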
A transition matrix approach to the Davenport gyro calibration scheme
NASA Technical Reports Server (NTRS)
Natanson, G. A.
1998-01-01
The in-flight gyro calibration scheme commonly used by NASA Goddard Space Flight Center (GSFC) attitude ground support teams closely follows an original version of the Davenport algorithm developed in the late seventies. Its basic idea is to minimize the least-squares differences between attitudes gyro-propagated over the course of a maneuver and those determined using post-maneuver sensor measurements. The paper represents the scheme in a recursive form by combining the necessary partials into a rectangular matrix, which is propagated in exactly the same way as a Kalman filter's square transition matrix. The nontrivial structure of the propagation matrix arises from the fact that attitude errors are not included in the state vector, and therefore their derivatives with respect to the estimated gyro parameters do not appear in the transition matrix defined in the conventional way. In cases when the required accuracy can be achieved by a single iteration, representation of the Davenport gyro calibration scheme in a recursive form allows one to discard each gyro measurement immediately after it is used to propagate the attitude and state transition matrix. Another advantage of the new approach is that it utilizes the same expression for the error sensitivity matrix as that used by the Kalman filter. As a result, the suggested modification of the Davenport algorithm made it possible to reuse software modules implemented in the Kalman filter estimator, where both attitude errors and gyro calibration parameters are included in the state vector. The new approach has been implemented in the ground calibration utilities used to support the Tropical Rainfall Measuring Mission (TRMM). The paper analyzes some preliminary results of gyro calibration performed by the TRMM ground attitude support team.
It is demonstrated that the effect of a second iteration on the estimated values of the calibration parameters is negligibly small, and therefore there is no need to store processed gyro data. This opens a promising opportunity for onboard implementation of the suggested recursive procedure by combining it with the Kalman filter used to obtain the necessary attitude solutions at the beginning and end of each maneuver.
Allegre, B; Therme, P
2008-10-01
Since the first writings on excessive exercise, there has been an increased interest in exercise dependence. One of the major consequences of this increased interest has been the development of several definitions and measures of exercise dependence. The work of Veale [Does primary exercise dependence really exist? In: Annet J, Cripps B, Steinberg H, editors. Exercise addiction: Motivation for participation in sport and exercise. Leicester, UK: Br Psychol Soc; 1995. p. 1-5.] provides an advance for the definition and measurement of exercise dependence. These studies adapted the DSM-IV criteria for substance dependence to measure exercise dependence. The Exercise Dependence Scale-Revised (EDS-R) is based on these diagnostic criteria, which are: tolerance; withdrawal effects; intention effect; lack of control; time; reductions in other activities; and continuance. Confirmatory factor analyses of the EDS-R supported a measurement model of 21 items loading on seven factors (Comparative Fit Index = 0.97; Root Mean Square Error of Approximation = 0.05; Tucker-Lewis Index = 0.96). The aim of this study was to examine the psychometric properties of a French version of the EDS-R [Factorial validity and psychometric examination of the exercise dependence scale-revised. Meas Phys Educ Exerc Sci 2004;8(4):183-201.] to test the stability of the seven-factor model of the original version in a French population. A total of 516 half-marathoners, ranging in age from 17 to 74 years (mean age = 39.02 years, SD = 10.64), with 402 men (77.9%) and 114 women (22.1%), participated in the study. The principal component analysis resulted in a six-factor structure, which accounts for 68.60% of the total variance. Because the principal component analysis presented a six-factor structure differing from the original seven-factor structure, two models were tested using confirmatory factor analysis.
The first model is the seven-factor model of the original version of the EDS-R and the second is the model produced by the principal component analysis. The confirmatory factor analysis supported the original model (with a seven-factor structure) as a good model, with good fit indices (χ²/df = 2.89, Root Mean Square Error of Approximation (RMSEA) = 0.061, Expected Cross Validation Index (ECVI) = 1.20, Goodness-of-Fit Index (GFI) = 0.92, Comparative Fit Index (CFI) = 0.94, Standardized Root Mean Square Residual (SRMR) = 0.048). These results showed that the French version of the EDS-R has an identical factor structure to the original. Therefore, the French version of the EDS-R is an acceptable scale to measure exercise dependence and can be used with a French population.
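As a consistency check, the reported RMSEA follows from the reported χ²/df ratio and the sample size via the standard point-estimate formula:

```python
import math

def rmsea(chi2_over_df, n):
    """RMSEA point estimate from the chi-square/df ratio and sample size
    (standard formula; some texts use N rather than N-1 in the denominator)."""
    return math.sqrt(max(chi2_over_df - 1.0, 0.0) / (n - 1))

# Values from the abstract: chi2/df = 2.89, N = 516 half-marathoners
print(round(rmsea(2.89, 516), 3))  # 0.061, matching the reported RMSEA
```

RMSEA values at or below about 0.06 are conventionally read as good fit, consistent with the abstract's conclusion.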
Uncertainty Analysis in Large Area Aboveground Biomass Mapping
NASA Astrophysics Data System (ADS)
Baccini, A.; Carvalho, L.; Dubayah, R.; Goetz, S. J.; Friedl, M. A.
2011-12-01
Satellite and aircraft-based remote sensing observations are being more frequently used to generate spatially explicit estimates of the aboveground carbon stock of forest ecosystems. Because deforestation and forest degradation account for circa 10% of anthropogenic carbon emissions to the atmosphere, policy mechanisms are increasingly recognized as a low-cost mitigation option to reduce carbon emissions. They are, however, contingent upon the capacity to accurately measure carbon stored in forests. Here we examine the sources of uncertainty and error propagation in generating maps of aboveground biomass. We focus on characterizing uncertainties associated with maps at the pixel and spatially aggregated national scales. We pursue three strategies to describe the error and uncertainty properties of aboveground biomass maps: (1) model-based assessment using confidence intervals derived from linear regression methods; (2) data-mining algorithms such as regression trees and ensembles of these; and (3) empirical assessments using independently collected data sets. The latter effort explores error propagation using field data acquired within satellite-based lidar (GLAS) acquisitions versus alternative in situ methods that rely upon field measurements that have not been systematically collected for this purpose (e.g. from forest inventory data sets). A key goal of our effort is to provide multi-level characterizations of uncertainty, with both pixel- and biome-level estimates at different scales.
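Strategy (1), confidence intervals from linear regression, can be sketched with an ordinary least-squares prediction interval; the height-biomass relationship and noise level below are synthetic, for illustration only:

```python
import numpy as np

def prediction_interval(x, y, x_new, t_crit=1.96):
    """Pixel-level uncertainty via an OLS prediction interval
    (illustrative of strategy 1; the data here are synthetic)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    b1 = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)   # slope
    b0 = y.mean() - b1 * x.mean()                      # intercept
    resid = y - (b0 + b1 * x)
    s = np.sqrt((resid ** 2).sum() / (n - 2))          # residual standard error
    se = s * np.sqrt(1 + 1 / n + (x_new - x.mean()) ** 2
                     / ((x - x.mean()) ** 2).sum())
    yhat = b0 + b1 * x_new
    return yhat - t_crit * se, yhat + t_crit * se

rng = np.random.default_rng(0)
h = rng.uniform(5, 35, 100)                  # mock lidar canopy height (m)
agb = 12.0 * h + rng.normal(0, 25, 100)      # mock aboveground biomass (Mg/ha)
lo, hi = prediction_interval(h, agb, 20.0)
print(round(lo, 1), round(hi, 1))            # interval for a pixel with 20 m canopy
```

National-scale estimates would then aggregate these pixel-level intervals, which is where spatial error correlation (not modelled in this sketch) becomes important.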
Quantifying radar-rainfall uncertainties in urban drainage flow modelling
NASA Astrophysics Data System (ADS)
Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.
2015-09-01
This work presents the results of the implementation of a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the North of England. The spatial and temporal correlations of the RR errors as well as the error covariance matrix were computed to build a RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated through the urban sewer system. The measured flow peaks and flow volumes are often bounded within the uncertainty band produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that there are additional sources of uncertainty that must be considered, such as the uncertainty in the urban drainage model structure, the uncertainty in the calibrated model parameters, and the uncertainty in the measured sewer flows.
Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning
NASA Astrophysics Data System (ADS)
Bradley, Ben K.
Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions.
This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and orbit propagation, yielding savings in computation time and memory. Orbit propagation and position transformation simulations are analyzed to generate a complete set of recommendations for performing the ITRS/GCRS transformation for a wide range of needs, encompassing real-time on-board satellite operations and precise post-processing applications. In addition, a complete derivation of the ITRS/GCRS frame transformation time-derivative is detailed for use in velocity transformations between the GCRS and ITRS and is applied to orbit propagation in the rotating ITRS. EOP interpolation methods and ocean tide corrections are shown to impact the ITRS/GCRS transformation accuracy at the level of 5 cm and 20 cm on the surface of the Earth and at the Global Positioning System (GPS) altitude, respectively. The precession-nutation and EOP simplifications yield maximum propagation errors of approximately 2 cm and 1 m after 15 minutes and 6 hours in low-Earth orbit (LEO), respectively, while reducing computation time and memory usage. Finally, for orbit propagation in the ITRS, a simplified scheme is demonstrated that yields propagation errors under 5 cm after 15 minutes in LEO. This approach is beneficial for orbit determination based on GPS measurements. We conclude with a summary of recommendations on EOP usage and bias-precession-nutation implementations for achieving a wide range of transformation and propagation accuracies at several altitudes. This comprehensive set of recommendations allows satellite operators, astrodynamicists, and scientists to make informed decisions when choosing the best implementation for their application, balancing accuracy and computational complexity.
Computational fluid dynamics simulation of sound propagation through a blade row.
Zhao, Lei; Qiao, Weiyang; Ji, Liang
2012-10-01
The propagation of sound waves through a blade row is investigated numerically. A wave splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitude of different wave modes can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes equations (URANS). The transmission and reflection coefficients obtained by Computational Fluid Dynamics (CFD) are compared with semi-analytical results. It indicates that the low-order URANS scheme will cause large errors if the sound pressure level is lower than -100 dB (with the product of density, main flow velocity, and speed of sound as the reference pressure). The CFD code has sufficient precision when solving the interaction of a sound wave and a blade row, provided the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
Attitude Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation; it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion is required to obey a unit norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order extended Kalman filter and a second-order filter, as well as for quaternion-norm-preserving attitude propagation.
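The norm-preserving propagation mentioned has a standard closed form for a constant body rate over a step: the matrix exponential of the skew-symmetric quaternion kinematics matrix is orthogonal, so the unit norm survives exactly. A sketch (vector-first quaternion convention is assumed here; the paper's own convention is not restated in this abstract):

```python
import numpy as np

def propagate_quaternion(q, omega, dt):
    """Norm-preserving discrete quaternion propagation for a constant body
    rate omega (rad/s) over dt, via the closed-form matrix exponential.
    Convention: q = [x, y, z, w], vector part first."""
    w = float(np.linalg.norm(omega))
    if w == 0.0:
        return q.copy()
    wx, wy, wz = omega
    Omega = np.array([[0.0,  wz, -wy,  wx],   # skew-symmetric kinematics matrix,
                      [-wz, 0.0,  wx,  wy],   # so its exponential is orthogonal
                      [ wy, -wx, 0.0,  wz],
                      [-wx, -wy, -wz, 0.0]])
    a = w * dt / 2.0
    return (np.cos(a) * np.eye(4) + (np.sin(a) / w) * Omega) @ q

q0 = np.array([0.0, 0.0, 0.0, 1.0])                     # identity attitude
q1 = propagate_quaternion(q0, np.array([0.0, 0.0, 0.1]), 0.5)
print(round(float(np.linalg.norm(q1)), 12))             # unit norm preserved: 1.0
```

Because Ω² = -|ω|²I for this matrix, cos(a)I + (sin(a)/|ω|)Ω is exactly exp(Ω dt/2), which is why no post-hoc renormalization is needed.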
NASA Astrophysics Data System (ADS)
Fort, Joaquim
2011-05-01
It is shown that Lotka-Volterra interaction terms are not appropriate to describe vertical cultural transmission. Appropriate interaction terms are derived and used to compute the effect of vertical cultural transmission on demic front propagation. They are also applied to a specific example, the Neolithic transition in Europe. In this example, it is found that the effect of vertical cultural transmission can be important (about 30%). On the other hand, simple models based on differential equations can lead to large errors (above 50%). Further physical, biophysical, and cross-disciplinary applications are outlined.
Scheme for Terminal Guidance Utilizing Acousto-Optic Correlator.
longitudinally extending acousto-optic device as index of refraction variation pattern signals. Real time signals corresponding to the scene actually being viewed...by the vehicle are propagated across the stored signals, and the results of an acousto-optic correlation are utilized to determine X and Y error
Precipitation is a key control on watershed hydrologic modelling output, with errors in rainfall propagating through subsequent stages of water quantity and quality analysis. Most watershed models incorporate precipitation data from rain gauges; higher-resolution data sources are...
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.
2015-12-02
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
NASA Astrophysics Data System (ADS)
Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.
2011-10-01
Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
On the use of the covariance matrix to fit correlated data
NASA Astrophysics Data System (ADS)
D'Agostini, G.
1994-07-01
Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
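The bias described above is easy to reproduce numerically. The sketch below (our own illustration with invented numbers, not data from the paper) fits a constant to two measurements that share a 5% normalization uncertainty, building the covariance matrix by the usual linearized propagation; the covariance-based fit lands well below the naive average of the two points:

```python
import numpy as np

# Two hypothetical measurements of the same quantity with independent
# statistical errors plus a common 5% normalization uncertainty.
y = np.array([8.0, 8.5])
sigma_stat = 0.1          # statistical error on each point (assumed)
sigma_norm = 0.05         # 5% relative normalization error (assumed)

# Empirical covariance built by linearized error propagation:
# V_ij = delta_ij * sigma_stat^2 + sigma_norm^2 * y_i * y_j
V = np.diag(np.full(2, sigma_stat**2)) + sigma_norm**2 * np.outer(y, y)

# Least-squares fit of a constant k: minimize (y - k)^T V^-1 (y - k)
ones = np.ones(2)
k_hat = (ones @ np.linalg.solve(V, y)) / (ones @ np.linalg.solve(V, ones))

naive_mean = y.mean()     # fit that ignores the normalization correlation
print(k_hat, naive_mean)  # the covariance fit is pulled below the naive mean
```

With these numbers the naive mean is 8.25, yet the covariance fit is dragged down to the lower data point, illustrating how the bias becomes unacceptable when the normalization error is large.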
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.
1990-01-01
Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.
1989-04-13
[Fragmentary report text: describes the preprocessor subroutine BSM1 and the solution subroutines BSM2 and BSM3, which may be skipped by readers interested only in the boundary-condition modifications; relates the solution error on interior row j = N-1 to the solution error C5 on the second row j = IE(2) of the last block, so that P3 = C5 R31.]
A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Reale, O.; Atlas, R.; Jusem, J. C.
2004-01-01
Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e., cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent occurrence of over-forecasting can undermine the reliance on operational weather forecasting. Therefore, it is important to understand and reduce the predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric geopotential height negative anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area and as a consequence a multiple, but much more moderate, cyclogenesis is observed.
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media; however, their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
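The block-Toeplitz observation above is what enables the convolution-based speedups. A minimal 1-D sketch (our own toy, far simpler than the paper's 2-D block operators) showing that a Toeplitz matrix-vector product is exactly a convolution, which FFT-based methods can then accelerate:

```python
import numpy as np

# A Toeplitz matrix T with entries T[i, j] = h[i - j] applies the same
# shift-invariant kernel to every element, so T @ x is a convolution.
rng = np.random.default_rng(0)
n = 8
h = rng.random(2 * n - 1)   # h[k + n - 1] holds the kernel value at offset k
T = np.array([[h[i - j + n - 1] for j in range(n)] for i in range(n)])

x = rng.random(n)
direct = T @ x              # dense matrix-vector product, O(n^2)

# Same result via convolution: take the n central samples of the full
# convolution of the kernel with x (FFT methods exploit exactly this).
conv = np.convolve(h, x)[n - 1:2 * n - 1]
print(np.allclose(direct, conv))
```

The same identity holds blockwise for the 2-D case, which is why a locally homogeneous medium admits fast convolution operators.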
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
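For readers unfamiliar with Gauss-Legendre implicit Runge-Kutta schemes, the one-stage member of the family is the implicit midpoint rule. The toy propagator below (our own fixed-step sketch with an assumed fixed-point stage solver, not the adaptive multi-stage VGL-IRK of the paper) integrates a circular two-body orbit with mu = 1 and shows the bounded energy error typical of these symplectic integrators:

```python
import numpy as np

mu = 1.0  # gravitational parameter (nondimensional toy value)

def accel(r):
    # Two-body point-mass acceleration.
    return -mu * r / np.linalg.norm(r)**3

def step_midpoint(r, v, h, iters=50):
    # One-stage Gauss-Legendre IRK (implicit midpoint): solve the
    # implicit stage equations by simple fixed-point iteration.
    rm, vm = r, v
    for _ in range(iters):
        rm = r + 0.5 * h * vm
        vm = v + 0.5 * h * accel(rm)
    return r + h * vm, v + h * accel(rm)

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])            # circular orbit initial conditions
E0 = 0.5 * v @ v - mu / np.linalg.norm(r)
for _ in range(1000):
    r, v = step_midpoint(r, v, 0.01)
E = 0.5 * v @ v - mu / np.linalg.norm(r)
print(abs(E - E0))                  # energy drift stays small
```

A production integrator would solve the stages with Newton iteration, use multiple stages for higher order, and adapt the step size, as the paper describes.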
NASA Technical Reports Server (NTRS)
Buglia, James J.
1989-01-01
An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, result from the discontinuities in the ephemeris tapes resulting from the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predict data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.
Finite element modeling of light propagation in fruit under illumination of continuous-wave beam
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of errors and uncertainty impacts on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transections, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources from measurement methods to spatial aggregation showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition follows the transaction distance, i.e., from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis.
However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.
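As a back-of-envelope check on the "error propagation" reading, independent error sources combine in quadrature. In the sketch below, only the 0.06 (measurement) and 0.79 (spatial aggregation) RMSEs come from the abstract; the intermediate values are invented placeholders:

```python
import math

# Illustrative RMSE budget (pH units) along the data-transaction chain.
# Only 0.06 (measurement) and 0.79 (spatial aggregation) are from the
# abstract; the other figures are made-up placeholders.
rmse_sources = {
    "measurement method": 0.06,
    "pedotransfer function": 0.20,   # assumed
    "database transection": 0.30,    # assumed
    "spatial aggregation": 0.79,
}

# If the stages were independent, errors would add in quadrature:
combined = math.sqrt(sum(e**2 for e in rmse_sources.values()))
print(round(combined, 2))
```

The quadrature total is dominated by the spatial-aggregation term, consistent with the abstract's growth from 0.06 to roughly 0.8 pH units.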
A cycling workstation to facilitate physical activity in office settings.
Elmer, Steven J; Martin, James C
2014-07-01
Facilitating physical activity during the workday may help desk-bound workers reduce risks associated with sedentary behavior. We 1) evaluated the efficacy of a cycling workstation to increase energy expenditure while performing a typing task and 2) fabricated a power measurement system to determine the accuracy and reliability of an exercise cycle. Ten individuals performed 10 min trials of sitting while typing (SIT type) and pedaling while typing (PED type). Expired gases were recorded and typing performance was assessed. Metabolic cost during PED type was ∼ 2.5 × greater compared to SIT type (255 ± 14 vs. 100 ± 11 kcal h(-1), P < 0.01). Typing time and number of typing errors did not differ between PED type and SIT type (7.7 ± 1.5 vs. 7.6 ± 1.6 min, P = 0.51, 3.3 ± 4.6 vs. 3.8 ± 2.7 errors, P = 0.80). The exercise cycle overestimated power by 14-138% compared to actual power but actual power was reliable (r = 0.998, P < 0.01). A cycling workstation can facilitate physical activity without compromising typing performance. The exercise cycle's inaccuracy could be misleading to users. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
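A stripped-down version of the Monte Carlo probability-of-collision computation reads as follows; the relative-state mean, covariance, and hard-body radius are invented for illustration, and the sampling is done directly in Cartesian coordinates rather than the paper's equinoctial elements:

```python
import numpy as np

# Monte Carlo Pc sketch: sample the relative position at TCA from a
# Gaussian and count samples inside the combined hard-body sphere.
rng = np.random.default_rng(1)

mean_rel = np.array([30.0, 0.0, 0.0])     # mean relative position at TCA, m (assumed)
cov_rel = np.diag([30.0, 30.0, 30.0])**2  # relative position covariance, m^2 (assumed)
hard_body_radius = 20.0                   # combined object radius, m (assumed)

samples = rng.multivariate_normal(mean_rel, cov_rel, size=200_000)
pc = np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)
print(pc)   # Monte Carlo estimate of the collision probability
```

The sample count sets the smallest resolvable Pc; resolving events near the 10^-4 threshold quoted above would require correspondingly more samples.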
Seismo-Acoustic Numerical Investigation of Land Impacts, Water Impacts, or Air Bursts of Asteroids
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.; Miller, P. L.; Dearborn, D. S.
2016-12-01
The annual probability of an asteroid impact is low, but over time, such catastrophic events are inevitable. Interest in assessing the impact consequences has led us to develop a physics-based framework to seamlessly simulate the event from entry to impact, including air, water and ground shock propagation and wave generation. The non-linear effects are simulated using the hydrodynamics code GEODYN. As effects propagate outward, they become a wave source for the linear-elastic-wave propagation codes and are simulated using SAW or SWWP, depending on whether the asteroid impacts the land or the ocean, respectively. The GEODYN-SAW-SWWP coupling is based on the structured adaptive-mesh-refinement infrastructure, SAMRAI, and has been used in FEMA table-top exercises conducted in 2013 and 2014, and more recently, the 2015 Planetary Defense Conference exercise. Moreover, during atmospheric entry, asteroids create an acoustic trace that could be used to infer several physical characteristics of the asteroid itself. Using SAW we explore the physical parameter space in order to rank the most important characteristics. Results from these simulations provide an estimate of onshore and offshore effects and can inform more sophisticated inundation and structural models. The capabilities of this methodology are illustrated by providing results for different impact locations, and an exploration of asteroid size on the waves arriving at the shoreline of area cities. We constructed the maximum and minimum envelopes of water-wave heights or acceleration spectra given the size of the asteroid and the location of the impact along the risk corridor. Such profiles can inform emergency response and disaster-mitigation efforts. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Validity of Wearable Activity Monitors during Cycling and Resistance Exercise.
Boudreaux, Benjamin D; Hebert, Edward P; Hollander, Daniel B; Williams, Brian M; Cormier, Corinne L; Naquin, Mildred R; Gillan, Wynn W; Gusew, Emily E; Kraemer, Robert R
2018-03-01
The use of wearable activity monitors has seen rapid growth; however, the mode and intensity of exercise could affect the validity of heart rate (HR) and caloric (energy) expenditure (EE) readings. There is a lack of data regarding the validity of wearable activity monitors during graded cycling regimen and a standard resistance exercise. The present study determined the validity of eight monitors for HR compared with an ECG and seven monitors for EE compared with a metabolic analyzer during graded cycling and resistance exercise. Fifty subjects (28 women, 22 men) completed separate trials of graded cycling and three sets of four resistance exercises at a 10-repetition-maximum load. Monitors included the following: Apple Watch Series 2, Fitbit Blaze, Fitbit Charge 2, Polar H7, Polar A360, Garmin Vivosmart HR, TomTom Touch, and Bose SoundSport Pulse (BSP) headphones. HR was recorded after each cycling intensity and after each resistance exercise set. EE was recorded after both protocols. Validity was established as having a mean absolute percent error (MAPE) value of ≤10%. The Polar H7 and BSP were valid during both exercise modes (cycling: MAPE = 6.87%, R = 0.79; resistance exercise: MAPE = 6.31%, R = 0.83). During cycling, the Apple Watch Series 2 revealed the greatest HR validity (MAPE = 4.14%, R = 0.80). The BSP revealed the greatest HR accuracy during resistance exercise (MAPE = 6.24%, R = 0.86). Across all devices, as exercise intensity increased, there was greater underestimation of HR. No device was valid for EE during cycling or resistance exercise. HR from wearable devices differed at different exercise intensities; EE estimates from wearable devices were inaccurate. Wearable devices are not medical devices, and users should be cautious when using these devices for monitoring physiological responses to exercise.
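The ≤10% MAPE validity criterion used in the study above is straightforward to compute. A sketch with hypothetical heart-rate readings (the numbers are ours, not the study's data):

```python
# Mean absolute percent error (MAPE) against a criterion measure;
# a device is deemed valid here when MAPE <= 10%.
def mape(device, criterion):
    pairs = list(zip(device, criterion))
    return 100.0 * sum(abs(d - c) / c for d, c in pairs) / len(pairs)

# Hypothetical HR readings (bpm) vs an ECG criterion, for illustration only.
ecg    = [100, 120, 140, 160]
device = [ 96, 118, 133, 148]
print(round(mape(device, ecg), 2), mape(device, ecg) <= 10.0)
```

Note that MAPE alone ignores the direction of error; the study's observation that underestimation grows with intensity requires looking at signed errors as well.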
Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.
Pinton, Gianmarco F
2017-03-01
Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking.
This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.
Evaluation of a scale-model experiment to investigate long-range acoustic propagation
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.
1987-01-01
Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave type of sound source were measured over a wavelength range from 10 to 160 for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at more remote microphones. Sensitivity studies for the hard-surface configuration indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
Finite-difference time-domain synthesis of infrasound propagation through an absorbing atmosphere.
de Groot-Hedlin, C
2008-09-01
Equations applicable to finite-difference time-domain (FDTD) computation of infrasound propagation through an absorbing atmosphere are derived and examined in this paper. It is shown that over altitudes up to 160 km, and at frequencies relevant to global infrasound propagation, i.e., 0.02-5 Hz, the acoustic absorption in dB/m varies approximately as the square of the propagation frequency plus a small constant term. A second-order differential equation is presented for an atmosphere modeled as a compressible Newtonian fluid with low shear viscosity, acted on by a small external damping force. It is shown that the solution to this equation represents pressure fluctuations with the attenuation indicated above. Increased dispersion is predicted at altitudes over 100 km at infrasound frequencies. The governing propagation equation is separated into two partial differential equations that are first order in time for FDTD implementation. A numerical analysis of errors inherent to this FDTD method shows that the attenuation term imposes additional stability constraints on the FDTD algorithm. Comparison of FDTD results for models with and without attenuation shows that the predicted transmission losses for the attenuating media agree with those computed from synthesized waveforms.
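A minimal FDTD sketch in the spirit of the paper, splitting the wave equation into two first-order updates with an added damping term on the pressure (the grid, CFL number, and damping coefficient are our own toy assumptions, not the paper's atmospheric absorption model):

```python
import numpy as np

# Toy 1-D staggered-grid FDTD for linear acoustics (density set to 1)
# with a simple damping term -alpha*p standing in for absorption.
nx, nt = 200, 400
c, dx = 340.0, 1.0
dt = 0.5 * dx / c            # CFL number 0.5, inside the stability limit
alpha = 20.0                 # damping rate, 1/s (assumed)

p = np.exp(-0.5 * ((np.arange(nx) - nx / 2) / 5.0) ** 2)  # Gaussian pulse
u = np.zeros(nx + 1)

for _ in range(nt):
    u[1:-1] -= dt / dx * (p[1:] - p[:-1])
    # Damped pressure update: the -alpha*p term absorbs energy each step.
    p -= dt * (c**2 / dx * (u[1:] - u[:-1]) + alpha * p)

print(np.abs(p).max())   # pulse amplitude after propagation with absorption
```

As the paper notes for the real equations, the damping term adds its own stability constraint (here, alpha*dt must remain well below 2) on top of the usual CFL condition.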
Exercise-based cardiac rehabilitation for adults after heart valve surgery.
Sibilitz, Kirstine L; Berg, Selina K; Tang, Lars H; Risom, Signe S; Gluud, Christian; Lindschou, Jane; Kober, Lars; Hassager, Christian; Taylor, Rod S; Zwisler, Ann-Dorthe
2016-03-21
Exercise-based cardiac rehabilitation may benefit heart valve surgery patients. We conducted a systematic review to assess the evidence for the use of exercise-based intervention programmes following heart valve surgery. To assess the benefits and harms of exercise-based cardiac rehabilitation compared with no exercise training intervention, or treatment as usual, in adults following heart valve surgery. We considered programmes including exercise training with or without another intervention (such as a psycho-educational component). We searched: the Cochrane Central Register of Controlled Trials (CENTRAL); the Database of Abstracts of Reviews of Effects (DARE); MEDLINE (Ovid); EMBASE (Ovid); CINAHL (EBSCO); PsycINFO (Ovid); LILACS (Bireme); and Conference Proceedings Citation Index-S (CPCI-S) on Web of Science (Thomson Reuters) on 23 March 2015. We handsearched Web of Science, bibliographies of systematic reviews and trial registers (ClinicalTrials.gov, Controlled-trials.com, and The World Health Organization International Clinical Trials Registry Platform). We included randomised clinical trials that investigated exercise-based interventions compared with no exercise intervention control. The trial participants comprised adults aged 18 years or older who had undergone heart valve surgery for heart valve disease (from any cause) and received either heart valve replacement or heart valve repair. Two authors independently extracted data. We assessed the risk of systematic errors ('bias') by evaluation of bias risk domains. Clinical and statistical heterogeneity were assessed. Meta-analyses were undertaken using both fixed-effect and random-effects models. We used the GRADE approach to assess the quality of evidence. We sought to assess the risk of random errors with trial sequential analysis. We included two trials, from 1987 and 2004, with a total of 148 participants who had undergone heart valve surgery.
Both trials had a high risk of bias. There was insufficient evidence at 3 to 6 months' follow-up to judge the effect of exercise-based cardiac rehabilitation compared to no exercise on mortality (RR 4.46 (95% confidence interval (CI) 0.22 to 90.78); participants = 104; studies = 1; quality of evidence: very low) and on serious adverse events (RR 1.15 (95% CI 0.37 to 3.62); participants = 148; studies = 2; quality of evidence: very low). Included trials did not report on health-related quality of life (HRQoL) or on the secondary outcomes of New York Heart Association class, left ventricular ejection fraction and cost. We did find that, compared with control (no exercise), exercise-based rehabilitation may increase exercise capacity (SMD -0.47, 95% CI -0.81 to -0.13; participants = 140; studies = 2; quality of evidence: moderate). There was insufficient evidence at 12 months' follow-up for the return-to-work outcome (RR 0.55 (95% CI 0.19 to 1.56); participants = 44; studies = 1; quality of evidence: low). Due to limited information, trial sequential analysis could not be performed as planned. Our findings suggest that exercise-based rehabilitation for adults after heart valve surgery, compared with no exercise, may improve exercise capacity. Due to a lack of evidence, we cannot evaluate the impact on other outcomes. Further high-quality randomised clinical trials are needed in order to assess the impact of exercise-based rehabilitation on patient-relevant outcomes, including mortality and quality of life.
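For reference, risk ratios and the 95% confidence intervals of the kind quoted above are computed from event counts via the log-normal approximation; the counts below are hypothetical, not the trial data:

```python
import math

# Risk ratio with a 95% CI from 2x2 event counts (log-normal approximation).
# e1/n1: events and total in the intervention arm; e2/n2: control arm.
def risk_ratio_ci(e1, n1, e2, n2, z=1.96):
    rr = (e1 / n1) / (e2 / n2)
    se_log = math.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts, chosen only to illustrate how sparse events
# produce the very wide intervals seen in the review.
rr, lo, hi = risk_ratio_ci(e1=4, n1=50, e2=2, n2=50)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

With so few events the interval spans well above and below 1, which is exactly why intervals like 0.22 to 90.78 arise from small trials.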
Benefits of regular aerobic exercise for executive functioning in healthy populations.
Guiney, Hayley; Machado, Liana
2013-02-01
Research suggests that regular aerobic exercise has the potential to improve executive functioning, even in healthy populations. The purpose of this review is to elucidate which components of executive functioning benefit from such exercise in healthy populations. In light of the developmental time course of executive functions, we consider separately children, young adults, and older adults. Data to date from studies of aging provide strong evidence of exercise-linked benefits related to task switching, selective attention, inhibition of prepotent responses, and working memory capacity; furthermore, cross-sectional fitness data suggest that working memory updating could potentially benefit as well. In young adults, working memory updating is the main executive function shown to benefit from regular exercise, but cross-sectional data further suggest that task switching and post-error performance may also benefit. In children, working memory capacity has been shown to benefit, and cross-sectional data suggest potential benefits for selective attention and inhibitory control. Although more research investigating exercise-related benefits for specific components of executive functioning is clearly needed in young adults and children, when considered across the age groups, ample evidence indicates that regular engagement in aerobic exercise can provide a simple means for healthy people to optimize a range of executive functions.
Wittink, Harriet; Verschuren, Olaf; Terwee, Caroline; de Groot, Janke; Kwakkel, Gert; van de Port, Ingrid
2017-11-21
To systematically review and critically appraise the literature on measurement properties of cardiopulmonary exercise test protocols for measuring aerobic capacity, VO2max, in persons after stroke. PubMed, Embase and Cinahl were searched from inception up to 15 June 2016. A total of 9 studies were identified reporting on 9 different cardiopulmonary exercise test protocols. VO2max measured with cardiopulmonary exercise test and open spirometry was the construct of interest. The target population was adult persons after stroke. We included all studies that evaluated reliability, measurement error, criterion validity, content validity, hypothesis testing and/or responsiveness of cardiopulmonary exercise test protocols. Two researchers independently screened the literature, assessed methodological quality using the COnsensus-based Standards for the selection of health Measurement INstruments checklist and extracted data on measurement properties of cardiopulmonary exercise test protocols. Most studies reported on only one measurement property. Best-evidence synthesis was derived taking into account the methodological quality of the studies, the results and the consistency of the results. No judgement could be made on which protocol is "best" for measuring VO2max in persons after stroke due to lack of high-quality studies on the measurement properties of the cardiopulmonary exercise test.
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Study of a co-designed decision feedback equalizer, deinterleaver, and decoder
NASA Technical Reports Server (NTRS)
Peile, Robert E.; Welch, Loyd
1990-01-01
A technique that promises better quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have caused considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by codesigning the equalizer and decoder. Although systems with time-invariant channels and a simple DFE having linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.
Application of Adjoint Methodology in Various Aspects of Sonic Boom Design
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.
2014-01-01
One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.
NASA Technical Reports Server (NTRS)
Snow, Frank; Harman, Richard; Garrick, Joseph
1988-01-01
The Gamma Ray Observatory (GRO) spacecraft needs highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the Kalman filter for stability and for the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on the filter. In the simulations, onboard attitude is compared with true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
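The propagate-then-update cycle of such a filter can be sketched in one dimension (illustrative noise values only; the actual GRO filter estimates a full attitude state from FHST and gyro data):

```python
import random

def kalman_1d(measurements, gyro_rates, dt, q, r, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter: propagate an angle estimate with a gyro
    rate, then update with a star-tracker angle measurement.  Returns the
    state history and the innovations (residuals) used for consistency
    checks."""
    x, p = x0, p0
    states, residuals = [], []
    for z, w in zip(measurements, gyro_rates):
        x += w * dt          # propagate the angle with the gyro rate
        p += q               # inflate covariance by process noise
        nu = z - x           # innovation (measurement residual)
        k = p / (p + r)      # Kalman gain
        x += k * nu          # measurement update
        p *= (1.0 - k)
        states.append(x)
        residuals.append(nu)
    return states, residuals

# Simulated check: constant 0.1 deg/s rotation, noisy angle measurements.
rng = random.Random(0)
dt, rate, n = 1.0, 0.1, 200
truth = [rate * dt * (i + 1) for i in range(n)]
meas = [t + rng.gauss(0.0, 0.1) for t in truth]
states, residuals = kalman_1d(meas, [rate] * n, dt, q=1e-6, r=0.01)
```

As in the paper's residual analysis, a filter that is consistent with its noise models produces innovations that are roughly zero-mean.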
Hanson, Sonya M.; Ekins, Sean; Chodera, John D.
2015-01-01
All experimental assay data contains error, but the magnitude, type, and primary origin of this error are often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
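The bootstrap idea can be sketched by resimulating a serial dilution many times with a per-transfer random imprecision (CV) and a systematic bias; the parameter values below are invented for illustration, not taken from the paper:

```python
import random
import statistics

def simulate_dilution_series(stock=100.0, dilution=0.5, steps=8,
                             cv=0.02, bias=0.01, n_boot=10000, seed=42):
    """Bootstrap-style simulation of a serial dilution.

    Each transfer is perturbed by a relative random error (cv) and a
    systematic relative bias; the spread of the final concentration
    across replicates approximates the propagated assay error.
    All numeric values are illustrative assumptions."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_boot):
        conc = stock
        for _ in range(steps):
            factor = dilution * (1.0 + bias) * (1.0 + rng.gauss(0.0, cv))
            conc *= factor
        finals.append(conc)
    return statistics.mean(finals), statistics.stdev(finals)

mean_c, sd_c = simulate_dilution_series()
```

Note how a 1% per-transfer bias compounds over eight steps into a roughly 8% shift of the final concentration, while the 2% random error grows only as the square root of the number of steps.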
[Rehabilitation after anterior cruciate ligament suturing].
Andrtová, M; Chlupatá, I
1994-01-01
The authors discuss problems of rehabilitation after suture of the anterior cruciate ligament where frequently errors are committed and where inadequate rehabilitation may cause damage to the patient. Different periods of rehabilitation after LCA sutures are discussed and suitable methods of exercise for different periods are recommended.
A European Navy: Can it Complete European Political and Economic Integration?
2012-06-01
would enjoy a comprehensive and complete defense structure, perhaps even more effective and cost-efficient than is possible with the current lineup ...malfunctions or operator error. A unified and unifying European Navy would standardize basic equipment and procedures , facilitating effective exercises
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
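The law of error propagation invoked above is, in the linearized case, a covariance mapping Sigma_out = J Sigma J^T. A minimal sketch with a made-up 2x2 Jacobian and covariance (not real orbital partials):

```python
def mat_mul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(col) for col in zip(*m)]

def propagate_covariance(jacobian, cov):
    """Linearized error propagation: Sigma_out = J Sigma J^T."""
    return mat_mul(mat_mul(jacobian, cov), transpose(jacobian))

# Toy example: partials of a predicted position w.r.t. two orbital
# elements (hypothetical values), and the elements' covariance.
J = [[1.0, 2.0],
     [0.0, 1.0]]
Sigma = [[1.0, 0.0],
         [0.0, 4.0]]
Sigma_out = propagate_covariance(J, Sigma)
```

The eigenvectors and eigenvalues of such a propagated covariance define the positional uncertainty ellipsoid at the target epoch.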
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
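The planar-wavefront assumption makes the activation times linear in electrode position, AT_i ≈ t0 + s·x_i with slowness vector s, so the conduction velocity is 1/|s|. A simplified least-squares sketch (synchronized recordings and a hypothetical square electrode layout; the paper's ML estimator additionally handles unknown synchronization times):

```python
def fit_planar_wave(points, times):
    """Least-squares fit of AT_i = t0 + sx*x_i + sy*y_i.

    points: list of (x, y) electrode positions (mm)
    times:  activation times (ms)
    Returns (t0, sx, sy, speed) with speed = 1/|s| in mm/ms."""
    n = len(points)
    # Design matrix rows [1, x, y]; solve the normal equations.
    A = [[1.0, x, y] for (x, y) in points]
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    atb = [sum(A[k][i] * times[k] for k in range(n)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    M = [row[:] + [b] for row, b in zip(ata, atb)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (M[i][3] - sum(M[i][j] * p[j] for j in range(i + 1, 3))) / M[i][i]
    t0, sx, sy = p
    speed = 1.0 / (sx * sx + sy * sy) ** 0.5
    return t0, sx, sy, speed

# Hypothetical 2x2 electrode grid; wave travels along x at 2 mm/ms.
t0, sx, sy, speed = fit_planar_wave(
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
    [10.0, 10.5, 10.0, 10.5])
```

With noisy activation times this least-squares solution is exactly the Gaussian maximum-likelihood estimate when all AT variances are equal.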
Unit Testing for the Application Control Language (ACL) Software
NASA Technical Reports Server (NTRS)
Heinich, Christina Marie
2014-01-01
In the software development process, code needs to be tested before it can be packaged for release, in order to make sure the program actually does what it is supposed to do, as well as to check how the program deals with errors and edge cases (such as negative or very large numbers). One of the major parts of the testing process is unit testing, where you test specific units of the code to make sure each individual part of the code works. This project is about unit testing many different components of the ACL software and fixing any errors encountered. To do this, mocks of other objects need to be created, and every line of code needs to be exercised to make sure every case is accounted for. Mocks are important to make because they give direct control of the environment the unit lives in, instead of attempting to work with the entire program. This makes it easier to achieve the second goal of exercising every line of code.
Quilici, Belczak Cleusa Ema; Gildo, Cavalheri; de Godoy, Jose Maria Pereira; Quilici, Belczak Sergio; Augusto, Caffaro Roberto
2009-01-01
Aim The aim of this work was to compare the reduction in edema obtained in the conservative treatment of phlebopathies after resting and after performing a muscle exercise program in the Trendelenburg position. Methods Twenty-eight limbs of 24 patients with venous edema of distinct etiologies, classified between C3 and C5 using the CEAP classification, were evaluated. Volumetric evaluation by water displacement was carried out before and after resting in the Trendelenburg position, and after performing programmed muscle exercises 24 hours later under identical conditions of time, position and temperature. For the statistical analysis the paired t-test was used, with an alpha error of 5% being considered acceptable. Results The average total volume of the lower limbs was 3,967.46 mL. The mean reduction in edema obtained after resting was 92.9 mL, and after exercises it was 135.4 mL, giving a statistically significant difference (p-value = 0.0007). Conclusion In conclusion, exercises are more efficient at reducing the edema of the lower limbs than resting in the Trendelenburg position. PMID:19602249
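The paired t-test used for the statistical analysis compares each limb with itself; a minimal sketch with invented volume reductions (mL), not the study's data:

```python
import math

def paired_t(before, after):
    """Paired t statistic on the per-sample differences (after - before).

    Returns (t, degrees_of_freedom)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)                                # standard error
    return mean_d / se, n - 1

# Hypothetical edema reductions (mL): after rest vs. after exercise,
# measured on the same five limbs.
rest = [90.0, 85.0, 100.0, 95.0, 92.0]
exercise = [92.0, 88.0, 101.0, 99.0, 94.0]
t, df = paired_t(rest, exercise)
# With df = 4, the two-tailed 5% critical value from a t-table is 2.776;
# t above that threshold rejects the null of no difference.
```

Pairing removes the large between-limb variation in total volume, which is why the study's small mean difference (135.4 vs. 92.9 mL) can still be highly significant.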
Kim, Hyeon-Ki; Konishi, Masayuki; Takahashi, Masaki; Tabata, Hiroki; Endo, Naoya; Numao, Shigeharu; Lee, Sun-Kyoung; Kim, Young-Hak; Suzuki, Katsuhiko; Sakamoto, Shizuo
2015-01-01
Purpose To compare the effects of endurance exercise performed in the morning and evening on inflammatory cytokine responses in young men. Methods Fourteen healthy male participants aged 24.3 ± 0.8 years (mean ± standard error) performed endurance exercise in the morning (0900–1000 h) on one day and then in the evening (1700–1800 h) on another day with an interval of at least 1 week between each trial. In both the morning and evening trials, the participants walked for 60 minutes at approximately 60% of the maximal oxygen uptake (V·O2max) on a treadmill. Blood samples were collected to determine hormones and inflammatory cytokines at pre-exercise, immediately post exercise, and 2 h post exercise. Results Plasma interleukin (IL)-6 and adrenaline concentrations were significantly higher immediately after exercise in the evening trial than in the morning trial (P < 0.01, both). Serum free fatty acids concentrations were significantly higher in the evening trial than in the morning trial at 2 h after exercise (P < 0.05). Furthermore, a significant correlation was observed between the levels of IL-6 immediately post-exercise and free fatty acids 2 h post-exercise in the evening (r = 0.68, P < 0.01). Conclusions These findings suggest that the effect of acute endurance exercise in the evening enhances the plasma IL-6 and adrenaline concentrations compared to that in the morning. In addition, IL-6 was involved in increasing free fatty acids, suggesting that the evening is more effective for exercise-induced lipolysis compared with the morning. PMID:26352938
NASA Technical Reports Server (NTRS)
Mohler, L. R.; Styf, J. R.; Pedowitz, R. A.; Hargens, A. R.; Gershuni, D. H.
1997-01-01
Currently, the definitive diagnosis of chronic compartment syndrome is based on invasive measurements of intracompartmental pressure. We measured the intramuscular pressure and the relative oxygenation in the anterior compartment of the leg in eighteen patients who were suspected of having chronic compartment syndrome as well as in ten control subjects before, during, and after exercise. Chronic compartment syndrome was considered to be present if the intramuscular pressure was at least fifteen millimeters of mercury (2.00 kilopascals) before exercise, at least thirty millimeters of mercury (4.00 kilopascals) one minute after exercise, or at least twenty millimeters of mercury (2.67 kilopascals) five minutes after exercise. Changes in relative oxygenation were measured with use of the non-invasive method of near-infrared spectroscopy. In all patients and subjects, there was rapid relative deoxygenation after the initiation of exercise, the level of oxygenation remained relatively stable during continued exercise, and there was reoxygenation to a level that exceeded the pre-exercise resting level after the cessation of exercise. During exercise, maximum relative deoxygenation in the patients who had chronic compartment syndrome (mean relative deoxygenation [and standard error], -290 +/- 39 millivolts) was significantly greater than that in the patients who did not have chronic compartment syndrome (-190 +/- 10 millivolts) and that in the control subjects (-179 +/- 14 millivolts) (p < 0.05 for both comparisons). In addition, the interval between the cessation of exercise and the recovery of the pre-exercise resting level of oxygenation was significantly longer for the patients who had chronic compartment syndrome (184 +/- 54 seconds) than for the patients who did not have chronic compartment syndrome (39 +/- 19 seconds) and the control subjects (33 +/- 10 seconds) (p < 0.05 for both comparisons).
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Error Model and Compensation of Bell-Shaped Vibratory Gyro
Su, Zhong; Liu, Ning; Li, Qing
2015-01-01
A bell-shaped vibratory angular velocity gyro (BVG), inspired by the Chinese traditional bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation of the BVG. A dynamic equation is firstly established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator character, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified from the error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593
Radial orbit error reduction and sea surface topography determination using satellite altimetry
NASA Technical Reports Server (NTRS)
Engelis, Theodossios
1987-01-01
A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second-order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.
Comprehensive analysis of a medication dosing error related to CPOE.
Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L
2005-01-01
This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.
Duncan, Michael J; Smith, Mike; Bryant, Elizabeth; Eyre, Emma; Cook, Kathryn; Hankey, Joanne; Tallis, Jason; Clarke, Neil; Jones, Marc V
2016-01-01
The aim of this study was to investigate if the effects of changes in physiological arousal on timing performance can be accurately predicted by the catastrophe model. Eighteen young adults (8 males, 10 females) volunteered to participate in the study following ethical approval. After familiarisation, coincidence anticipation was measured using the Bassin Anticipation Timer under four incremental exercise conditions: Increasing exercise intensity and low cognitive anxiety, increasing exercise intensity and high cognitive anxiety, decreasing exercise intensity and low cognitive anxiety and decreasing exercise intensity and high cognitive anxiety. Incremental exercise was performed on a treadmill at intensities of 30%, 50%, 70% and 90% heart rate reserve (HRR) respectively. Ratings of cognitive anxiety were taken at each intensity using the Mental Readiness Form 3 (MRF3) followed by performance of coincidence anticipation trials at speeds of 3 and 8 mph. Results indicated significant condition × intensity interactions for absolute error (AE; p = .0001) and MRF cognitive anxiety intensity scores (p = .05). Post hoc analysis indicated that there were no statistically significant differences in AE across exercise intensities in low-cognitive anxiety conditions. In high-cognitive anxiety conditions, timing performance AE was significantly poorer and cognitive anxiety higher at 90% HRR, compared to the other exercise intensities. There was no difference in timing responses at 90% HRR during competitive trials, irrespective of whether exercise intensity was increasing or decreasing. This study suggests that anticipation timing performance is negatively affected when physiological arousal and cognitive anxiety are high.
Fort, Joaquim
2011-05-01
It is shown that Lotka-Volterra interaction terms are not appropriate to describe vertical cultural transmission. Appropriate interaction terms are derived and used to compute the effect of vertical cultural transmission on demic front propagation. They are also applied to a specific example, the Neolithic transition in Europe. In this example, it is found that the effect of vertical cultural transmission can be important (about 30%). On the other hand, simple models based on differential equations can lead to large errors (above 50%). Further physical, biophysical, and cross-disciplinary applications are outlined. © 2011 American Physical Society
Brooks, Johnell; Kellett, Julie; Seeanner, Julia; Jenkins, Casey; Buchanan, Caroline; Kinsman, Anne; Kelly, Desmond; Pierce, Susan
2016-07-01
The purpose of this study was to investigate the utility of using a driving simulator to address the motor aspects of pre-driving skills with young adults with Autism Spectrum Disorder (ASD). A group of neurotypical control participants and ten participants with ASD completed 18 interactive steering and pedal exercises with the goal to achieve error-free performance. Most participants were able to achieve this goal within five trials for all exercises except for the two most difficult ones. Minimal performance differences were observed between the two groups. Participants with ASD needed more time to complete the tasks. Overall, the interactive exercises and the process used worked well to address motor related aspects of pre-driving skills in young adults with ASD.
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Rolland, Jannick P.; Grygotis, Emma; Wayson, Sarah; Helguera, Maria; Dalecki, Diane; Parker, Kevin J.
2018-02-01
Determining the mechanical properties of tissue such as elasticity and viscosity is fundamental for better understanding and assessment of pathological and physiological processes. Dynamic optical coherence elastography uses shear/surface wave propagation to estimate frequency-dependent wave speed and Young's modulus. However, for dispersive tissues, the displacement pulse is highly damped and distorted during propagation, diminishing the effectiveness of peak tracking approaches. The majority of methods used to determine mechanical properties assume a rheological model of tissue for the calculation of viscoelastic parameters. Further, plane wave propagation is sometimes assumed which contributes to estimation errors. To overcome these limitations, we invert a general wave propagation model which incorporates (1) the initial force shape of the excitation pulse in the space-time field, (2) wave speed dispersion, (3) wave attenuation caused by the material properties of the sample, (4) wave spreading caused by the outward cylindrical propagation of the wavefronts, and (5) the rheological-independent estimation of the dispersive medium. Experiments were conducted in elastic and viscous tissue-mimicking phantoms by producing a Gaussian push using acoustic radiation force excitation, and measuring the wave propagation using a swept-source frequency domain optical coherence tomography system. Results confirm the effectiveness of the inversion method in estimating viscoelasticity in both the viscous and elastic phantoms when compared to mechanical measurements. Finally, the viscoelastic characterization of collagen hydrogels was conducted. Preliminary results indicate a relationship between collagen concentration and viscoelastic parameters which is important for tissue engineering applications.
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling is assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcome of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽ 2.5°, while that for the zeta potential values must be ⩽ 7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
Application of neural nets in structural optimization
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hajela, Prabhat
1993-01-01
The biological motivation for Artificial Neural Net developments is briefly discussed, and the most popular paradigm, the feedforward supervised-learning net with error back-propagation training algorithm, is introduced. Possible approaches for utilization in structural optimization are illustrated through simple examples. Other currently ongoing developments for application in structural mechanics are also mentioned.
Dual Accelerometer Usage Strategy for Onboard Space Navigation
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Chris
2012-01-01
This work introduces a dual accelerometer usage strategy for onboard space navigation. In the proposed algorithm the accelerometer is used to propagate the state when its value exceeds a threshold and it is used to estimate its errors otherwise. Numerical examples and comparison to other accelerometer usage schemes are presented to validate the proposed approach.
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
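For illustration, a sketch of the York-type iteration such a fitting program is built on, assuming uncorrelated x and y errors; Mahon's contribution, the corrected error propagation onto the fitted slope and intercept, is omitted here:

```python
def york_fit(x, y, sx, sy, max_iter=50, tol=1e-12):
    """Straight-line fit y = a + b*x with errors in both coordinates
    (York-type iteration, uncorrelated errors).  sx, sy are the 1-sigma
    uncertainties of each point.  Returns (a, b)."""
    n = len(x)
    wx = [1.0 / s ** 2 for s in sx]   # weights from x uncertainties
    wy = [1.0 / s ** 2 for s in sy]   # weights from y uncertainties
    # Start the iteration from the ordinary least-squares slope.
    xb = sum(x) / n
    yb = sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    for _ in range(max_iter):
        # Combined weights depend on the current slope estimate.
        W = [wxi * wyi / (b * b * wyi + wxi) for wxi, wyi in zip(wx, wy)]
        Xb = sum(Wi * xi for Wi, xi in zip(W, x)) / sum(W)
        Yb = sum(Wi * yi for Wi, yi in zip(W, y)) / sum(W)
        U = [xi - Xb for xi in x]
        V = [yi - Yb for yi in y]
        beta = [Wi * (Ui / wyi + b * Vi / wxi)
                for Wi, Ui, Vi, wyi, wxi in zip(W, U, V, wy, wx)]
        b_new = (sum(Wi * Bi * Vi for Wi, Bi, Vi in zip(W, beta, V))
                 / sum(Wi * Bi * Ui for Wi, Bi, Ui in zip(W, beta, U)))
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = Yb - b * Xb
    return a, b

# Sanity check: with negligible x errors and equal y errors the fit
# must reduce to ordinary least squares.
a, b = york_fit([0.0, 1.0, 2.0, 3.0], [1.1, 1.9, 3.2, 3.8],
                [1e-9] * 4, [1.0] * 4)
```

When the x errors are significant, the iteration reweights each point by how well-determined it is perpendicular to the current line, which is where the naive OLS treatment goes wrong.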
The ability to effectively use remotely sensed data for environmental spatial analysis is dependent on understanding the underlying procedures and associated variances attributed to the data processing and image analysis technique. Equally important, also, is understanding the er...
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
NASA Astrophysics Data System (ADS)
Oyler, Benjamin L.; Khan, Mohd M.; Smith, Donald F.; Harberts, Erin M.; Kilgour, David P. A.; Ernst, Robert K.; Cross, Alan S.; Goodlett, David R.
2018-04-01
In the preceding article "Top Down Tandem Mass Spectrometric Analysis of a Chemically Modified Rough-Type Lipopolysaccharide Vaccine Candidate" by Oyler et al., an error in the J5 E. coli LPS chemical structure (Figs. 2 and 4) was introduced and propagated into the final revision.
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam propagating through weak to moderate oceanic turbulence; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront of a laser beam under test over a square area from the phase difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error propagation coefficients is deduced for the case in which the phase difference data of the overlapping area contain random noise. A matrix T is proposed to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms, and noise propagation coefficients, as well as between shear ratio, sampling points, and the norm of T, are analyzed. These results provide a theoretical reference for the optimized design of radial shearing interferometry systems.
Vemić, Ana; Rakić, Tijana; Malenović, Anđelija; Medenica, Mirjana
2015-01-01
The aim of this paper is to present a development of liquid chromatographic method when chaotropic salts are used as mobile phase additives following the QbD principles. The effect of critical process parameters (column chemistry, salt nature and concentration, acetonitrile content and column temperature) on the critical quality attributes (retention of the first and last eluting peak and separation of the critical peak pairs) was studied applying the design of experiments-design space methodology (DoE-DS). D-optimal design is chosen in order to simultaneously examine both categorical and numerical factors in minimal number of experiments. Two ways for the achievement of quality assurance were performed and compared. Namely, the uncertainty originating from the models was assessed by Monte Carlo simulations propagating the error equal to the variance of the model residuals and propagating the error originating from the model coefficients' calculation. The baseline separation of pramipexole and its five impurities is achieved fulfilling all the required criteria while the method validation proved its reliability. Copyright © 2014 Elsevier B.V. All rights reserved.
Stable lattice Boltzmann model for Maxwell equations in media
NASA Astrophysics Data System (ADS)
Hauser, A.; Verhey, J. L.
2017-12-01
The present work shows a method for stable simulation via the lattice Boltzmann (LB) model of electromagnetic (EM) waves transiting homogeneous media. LB models for such media have already been presented in the literature, but they suffer from numerical instability when the media transitions are sharp. We use one of these models in the limit of pure vacuum, derived from Liu and Yan [Appl. Math. Model. 38, 1710 (2014), 10.1016/j.apm.2013.09.009], and apply an extension that treats the effects of polarization and magnetization separately. We show simulations of simple examples in which EM waves travel into media, to quantify error scaling, stability, accuracy, and time scaling. For conductive media, we use Strang splitting and check the simulation accuracy on the example of the skin effect. As for pure EM propagation, the error for the static limits, which are constructed with a current density added in a first-order scheme, can be less than 1%. The presented method is an easily implemented alternative for stabilizing simulations of EM waves propagating in media with spatially complex structure and arbitrary transitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flechsig, U.; Follath, R.; Reiche, S.
PHASE is a software tool for physical optics simulation based on the stationary phase approximation method. The code has been under continuous development for about 20 years and has been used, for instance, for fundamental studies and ray tracing of various beamlines at the Swiss Light Source. Along with the planning for SwissFEL, a new hard X-ray free-electron laser under construction, new features have been added to permit practical performance predictions, including the diffraction effects that emerge with the fully coherent source. We present the application of the package to the example of the ARAMIS 1 beamline at SwissFEL. The X-ray pulse calculated with GENESIS and given as an electric field distribution has been propagated through the beamline to the sample position. We demonstrate the new features of PHASE, such as the treatment of measured figure errors, apertures, and coatings of the mirrors, and the application of Fourier optics propagators for free-space propagation.
A study of photon propagation in free-space based on hybrid radiosity-radiance theorem.
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Liang, Jimin; Wang, Lin; Yang, Da'an; Garofalakis, Anikitos; Ripoll, Jorge; Tian, Jie
2009-08-31
Noncontact optical imaging has attracted increasing attention in recent years due to its significant advantages in detection sensitivity, spatial resolution, image quality, and system simplicity compared with contact measurement. However, simulating photon transport in free space is still an extremely challenging topic owing to the complexity of the optical system. For this purpose, this paper proposes an analytical model for photon propagation in free space based on the hybrid radiosity-radiance theorem (HRRT). It combines Lambert's cosine law and the radiance theorem to handle the influence of the complicated lens and to simplify the photon transport process in the optical system. The performance of the proposed model is evaluated and validated with numerical simulations and physical experiments. Qualitative comparison results of the flux distribution at the detector are presented. In particular, error analysis demonstrates the feasibility and potential of the proposed model for simulating photon propagation in free space.
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. Calibration at the 10 cm level is found to be feasible only through the use of altimeter passes at very high elevation relative to a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass over the island of Bermuda. By far the largest error source, given the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
Managing Errors to Reduce Accidents in High Consequence Networked Information Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.
1999-02-01
Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.
NASA Astrophysics Data System (ADS)
Gautam, Ghaneshwar; Surmick, David M.; Parigger, Christian G.
2015-07-01
In this letter, we present a brief comment regarding the recently published paper by Ivković et al., J Quant Spectrosc Radiat Transf 2015;154:1-8. Reference is made to previous experimental results to indicate that self-absorption must have occurred; however, when error propagation is carefully considered, both the widths and the peak separation predict electron densities within the error margins. The diagnostic method and the presented details on the use of the hydrogen-beta peak separation are nevertheless viewed as a welcome contribution to studies of laser-induced plasma.
Differential phase measurements of D-region partial reflections
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Sechrist, C. F., Jr.
1972-01-01
Differential phase partial reflection measurements were used to deduce D region electron density profiles. The phase difference was measured by taking sums and differences of amplitudes received on an array of crossed dipoles. The reflection model used was derived from Fresnel reflection theory. Seven profiles obtained over the period from 13 October 1971 to 5 November 1971 are presented, along with the results from simultaneous measurements of differential absorption. Some possible sources of error and error propagation are discussed. A collision frequency profile was deduced from the electron concentration calculated from differential phase and differential absorption.
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
DOT National Transportation Integrated Search
1974-05-01
A resting 'normal' ECG can coexist with known angina pectoris, positive angiocardiography and previous myocardial infarction. In contemporary exercise ECG tests, a false positive/false negative total error of 10% is not unusual. Research aimed at imp...
Bayesian operational modal analysis of Jiangyin Yangtze River Bridge
NASA Astrophysics Data System (ADS)
Brownjohn, James Mark William; Au, Siu-Kui; Zhu, Yichen; Sun, Zhen; Li, Binbin; Bassitt, James; Hudson, Emma; Sun, Hongbin
2018-09-01
Vibration testing of long span bridges is becoming a commissioning requirement, yet such exercises represent the extreme of experimental capability, with challenges for instrumentation (due to frequency range, resolution and km-order separation of sensors) and system identification (because of the extremely low frequencies). The challenge with instrumentation for modal analysis is managing synchronous data acquisition from sensors distributed widely apart inside and outside the structure. The ideal solution is precisely synchronised autonomous recorders that do not need cables, GPS or wireless communication. The challenge with system identification is to maximise the reliability of modal parameters through experimental design and subsequently to identify the parameters in terms of mean values and standard errors. The challenge is particularly severe for modes with the low frequency and damping typical of long span bridges. One solution is to apply 'third generation' operational modal analysis procedures using Bayesian approaches in both the planning and analysis stages. The paper presents an exercise on the Jiangyin Yangtze River Bridge, a suspension bridge with a 1385 m main span. The exercise comprised planning of a test campaign to optimise the reliability of operational modal analysis, the deployment of a set of independent data acquisition units synchronised using precision oven-controlled crystal oscillators, and the subsequent identification of a set of modal parameters in terms of mean and variance errors. Although the bridge has had structural health monitoring technology installed since it was completed, this was the first full modal survey, aimed at identifying important features of the modal behaviour rather than providing fine resolution of mode shapes through the whole structure. Therefore, measurements were made only in the (south) tower, while torsional behaviour was identified by a single measurement using a pair of recorders across the carriageway.
The modal survey revealed a first lateral symmetric mode with natural frequency 0.0536 Hz (standard error ±3.6%) and damping ratio 4.4% (standard error ±88%). The first vertical mode is antisymmetric, with frequency 0.11 Hz ± 1.2% and damping ratio 4.9% ± 41%. A significant and novel element of the exercise was the planning of the measurement setups and their necessary duration, linked to prior estimation of the precision of the frequency and damping estimates. The second novelty is the use of multi-sensor precision synchronised acquisition without an external time reference on a structure of this scale. The challenges of ambient vibration testing and modal identification in a complex environment are addressed by leveraging advances in practical implementation and scientific understanding of the problem.
Quantum Graphical Models and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leifer, M.S.; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo Ont., N2L 2Y5; Poulin, D.
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
Multi-exemplar affinity propagation.
Wang, Chang-Dong; Lai, Jian-Huang; Suen, Ching Y; Zhu, Jun-Yong
2013-09-01
The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and it produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate when applied to model multisubclasses in some situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one to create a new multi-exemplar affinity propagation (MEAP) algorithm. This new model automatically determines the number of exemplars in each cluster associated with a super exemplar to approximate the subclasses in the category. Solving the model is NP-hard and we tackle it with the max-sum belief propagation to produce neighborhood maximum clusters, with no need to specify beforehand the number of clusters, multi-exemplars, and superexemplars. Also, utilizing the sparsity in the data, we are able to reduce substantially the computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
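The single-exemplar AP message updates (responsibilities and availabilities, with damping) can be written compactly in NumPy. This is a bare-bones sketch of the classic Frey-Dueck updates that MEAP extends, not the multi-exemplar algorithm itself:

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Bare-bones single-exemplar affinity propagation.

    S is an n x n similarity matrix; its diagonal holds each point's
    'preference' for becoming an exemplar.  Returns, for each point,
    the index of its exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    rows = np.arange(n)
    for _ in range(iters):
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        best = np.argmax(AS, axis=1)
        first = AS[rows, best]
        AS[rows, best] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[rows, best] = S[rows, best] - second
        R = damping * R + (1 - damping) * R_new
        # Availability: a(i,k) = min(0, r(k,k) + sum of other positive r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        col = Rp.sum(axis=0)
        A_new = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(A_new, col - np.diag(R))
        A = damping * A + (1 - damping) * A_new
    exemplars = np.flatnonzero(np.diag(R) + np.diag(A) > 0)
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars
    return labels

# Two well-separated 1-D clusters; preferences set to the median similarity.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(pts[:, None] - pts[None, :]) ** 2
np.fill_diagonal(S, np.median(S[~np.eye(len(pts), dtype=bool)]))
labels = affinity_propagation(S)
print(labels)
```

Points whose self-responsibility plus self-availability is positive become exemplars; every other point is assigned to its most similar exemplar, so the number of clusters is never specified in advance.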
Morosi, J; Berti, N; Akrout, A; Picozzi, A; Guasoni, M; Fatome, J
2018-01-22
In this manuscript, we experimentally and numerically investigate the chaotic dynamics of the state of polarization in a nonlinear optical fiber, driven by the cross-interaction between an incident signal and its intense backward replica generated at the fiber end through an amplified reflective delayed loop. Thanks to the cross-polarization interaction between the two delayed counter-propagating waves, the output polarization exhibits fast temporal chaotic dynamics, which enables a powerful scrambling process with moving speeds up to 600 krad/s. The performance of this all-optical scrambler was then evaluated on a 10-Gbit/s On/Off Keying telecom signal, achieving an error-free transmission. We also describe how these temporal chaotic polarization fluctuations can be exploited as an all-optical random number generator. To this aim, a billion-bit sequence was experimentally generated and successfully tested against the dieharder statistical benchmarking tools. Our experimental analyses are supported by numerical simulations based on the resolution of coupled nonlinear propagation equations for counter-propagating waves, which confirm the observed behaviors.
Information recovery in propagation-based imaging with decoherence effects
NASA Astrophysics Data System (ADS)
Froese, Heinrich; Lötgering, Lars; Wilhein, Thomas
2017-05-01
During the past decades, the optical imaging community has witnessed a rapid emergence of novel imaging modalities such as coherent diffraction imaging (CDI), propagation-based imaging and ptychography. These methods have been demonstrated to recover complex-valued scalar wave fields from redundant data without the need for refractive or diffractive optical elements. This renders these techniques suitable for imaging experiments with EUV and x-ray radiation, where the use of lenses is complicated by fabrication, photon efficiency and cost. However, decoherence effects can be detrimental to the reconstruction quality of the numerical algorithms involved. Here we demonstrate propagation-based optical phase retrieval from multiple near-field intensities in the presence of decoherence effects such as partially coherent illumination, detector point spread, binning and position uncertainties of the detector. Methods for overcoming these systematic experimental errors, based on the decomposition of the data into mutually incoherent modes, are proposed and numerically tested. We believe that the results presented here open up novel algorithmic methods to accelerate detector readout rates and enable subpixel resolution in propagation-based phase retrieval. Furthermore, the techniques can be straightforwardly extended to methods such as CDI, ptychography and holography.
Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.
2005-01-01
The present study intended to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of the accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements, and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below the LT. TED mean values obtained with CR and STR were not significantly different (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). The mean values of AOD obtained with CR and STR also did not differ (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was similar under the two procedures: the mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively), and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above the LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of the STR may induce an underestimation of AOD compared with the use of the CR. Key Points It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However, data on the precision of those AOD measurements are rarely provided.
We evaluated the effects of including those exercise intensities on AOD precision. The results indicated that including exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of sub-threshold regressions may induce an underestimation of AOD compared with the use of complete regressions. PMID:24501560
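The AOD procedure the study relies on can be sketched numerically: fit the submaximal VO2/running-speed regression, extrapolate it to the supramaximal speed to estimate total energy demand, and subtract the oxygen actually consumed. All data values here are hypothetical, not the study's measurements:

```python
def linfit(x, y):
    """Ordinary least-squares line: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical submaximal data: speed (km/h) vs steady-state VO2 (ml/kg/min)
speeds = [10, 12, 14, 16]
vo2 = [35.0, 42.1, 48.9, 56.2]
intercept, slope = linfit(speeds, vo2)

# Extrapolate the regression to a supramaximal speed -> total energy demand
supra_speed = 20.0                          # km/h, hypothetical
ted_rate = intercept + slope * supra_speed  # ml/kg/min
duration_min = 3.0                          # length of the exhaustive bout
ted = ted_rate * duration_min               # ml/kg

# AOD = estimated demand minus the oxygen actually consumed during the bout
measured_o2 = 150.0                         # ml/kg, hypothetical
aod = ted - measured_o2
print(f"TED = {ted:.1f} ml/kg, AOD = {aod:.1f} ml/kg")
```

The study's question amounts to whether the points used in `linfit` should include speeds above the lactate threshold: dropping them changes the extrapolated demand and hence the AOD.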
Smith, Toby O; King, Jonathan J; Hing, Caroline B
2012-11-01
Osteoarthritis (OA) is a leading cause of functional impairment and pain. Proprioceptive defects may be associated with the onset and progression of OA of the knee. The purpose of this study was to determine the effectiveness of proprioceptive exercises for knee OA using meta-analysis. A systematic review was conducted on 12th December 2011 using published (Cochrane Library, MEDLINE, EMBASE, CINAHL, AMED, PubMed, PEDro) and unpublished/trial registry (OpenGrey, the WHO International Clinical Trials Registry Platform, Current Controlled Trials and the UK National Research Register Archive) databases. Studies were included if they were full publications of randomized or non-randomised controlled trials (RCT) comparing a proprioceptive exercise regime, against a non-proprioceptive exercise programme or non-treatment control for adults with knee OA. Methodological appraisal was performed using the PEDro checklist. Seven RCTs including 560 participants (203 males and 357 females) with a mean age of 63 years were eligible. The methodological quality of the evidence base was moderate. Compared to a non-treatment control, proprioceptive exercises significantly improved functional outcomes in people with knee OA during the first 8 weeks following commencement of their exercises (p < 0.02). When compared against a general non-proprioceptive exercise programme, proprioceptive exercises demonstrated similar outcomes, only providing superior results with respect to joint position sense-related measurements such as timed walk over uneven ground (p = 0.03) and joint position angulation error (p < 0.01). Proprioceptive exercises are efficacious in the treatment of knee OA. There is some evidence to indicate the effectiveness of proprioceptive exercises compared to general strengthening exercises in functional outcomes.
Results of Computing Amplitude and Phase of the VLF Wave Using Wave Hop Theory
NASA Astrophysics Data System (ADS)
Pal, Sujay; Basak, Tamal; Chakrabarti, Sandip K.
2011-07-01
We present the basics of wave hop theory for computing the amplitude and phase of VLF signals. We use the Indian Navy VTX transmitter at 18.2 kHz as an example source and compute the VLF propagation characteristics for several propagation paths using wave hop theory. We find the signal amplitudes as a function of distance from the transmitter at different bearing angles and compare them with those obtained from the Long Wave Propagation Capability (LWPC) code, which uses mode theory. We repeat a similar exercise for the diurnal and seasonal behavior. We note that the signal variation computed by wave hop theory gives more detailed information in the daytime. We further present the spatial variation of the signal amplitude over the whole of India at a given time, including the effect of the sunrise and sunset terminator, and compare it with that from mode theory. We point out that the terminator effect is more clearly seen in the wave hop results than in those from mode theory.
Bongers, Pim J; Diederick van Hove, P; Stassen, Laurents P S; Dankelman, Jenny; Schreuder, Henk W R
2015-01-01
During laparoscopic surgery, distractions often occur, and multitasking between surgery and other tasks, such as handling technical equipment, is a necessary competence. In psychological research, a reduction of the adverse effects of distraction has been demonstrated when multitasking is specifically trained. The aim of this study was to examine whether multitasking, and more specifically task switching, can be trained in a virtual-reality (VR) laparoscopic skills simulator. After randomization, the control group trained separately with an insufflator simulation module and a laparoscopic skills exercise module on a VR simulator. In the intervention group, the insufflator module and VR skills exercises were combined to develop a new integrated training in which multitasking was a required competence. At random moments, problems with the insufflator appeared and forced the trainee to multitask. During several repetitions of a different multitask VR skills exercise as a posttest, performance parameters (laparoscopy time, insufflator time, and errors) were measured and compared between both groups as well as with a pretest exercise to establish the learning effect. A face-validity questionnaire was filled in afterward. University Medical Centre Utrecht, The Netherlands. Medical and PhD students (n = 42) from University Medical Centre Utrecht, without previous experience in laparoscopic simulation, were randomly assigned to either the intervention (n = 21) or control group (n = 21). All participants performed better in the posttest exercises without distraction from the insufflator compared with the exercises in which multitasking was necessary to solve the insufflator problems. After training, the intervention group was significantly quicker at solving the insufflator problems (mean = 1.60 log(s) vs 1.70 log(s), p = 0.02). No significant differences between the groups were seen in laparoscopy time and errors. Multitasking has negative effects on laparoscopic performance.
This study suggests an additional learning effect of training multitasking in VR laparoscopy simulation, because the trainees are able to handle a secondary task (solving insufflator problems) quicker. These results may aid the development of laparoscopy VR training programs in approximating real-life laparoscopic surgery. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Adverse Metabolic Response to Regular Exercise: Is It a Rare or Common Occurrence?
Bouchard, Claude; Blair, Steven N.; Church, Timothy S.; Earnest, Conrad P.; Hagberg, James M.; Häkkinen, Keijo; Jenkins, Nathan T.; Karavirta, Laura; Kraus, William E.; Leon, Arthur S.; Rao, D. C.; Sarzynski, Mark A.; Skinner, James S.; Slentz, Cris A.; Rankinen, Tuomo
2012-01-01
Background Individuals differ in the response to regular exercise. Whether there are people who experience adverse changes in cardiovascular and diabetes risk factors has never been addressed. Methodology/Principal Findings An adverse response is defined as an exercise-induced change that worsens a risk factor beyond measurement error and expected day-to-day variation. Sixty subjects were measured three times over a period of three weeks, and variation in resting systolic blood pressure (SBP) and in fasting plasma HDL-cholesterol (HDL-C), triglycerides (TG), and insulin (FI) was quantified. The technical error (TE) defined as the within-subject standard deviation derived from these measurements was computed. An adverse response for a given risk factor was defined as a change that was at least two TEs away from no change but in an adverse direction. Thus an adverse response was recorded if an increase reached 10 mm Hg or more for SBP, 0.42 mmol/L or more for TG, or 24 pmol/L or more for FI or if a decrease reached 0.12 mmol/L or more for HDL-C. Completers from six exercise studies were used in the present analysis: Whites (N = 473) and Blacks (N = 250) from the HERITAGE Family Study; Whites and Blacks from DREW (N = 326), from INFLAME (N = 70), and from STRRIDE (N = 303); and Whites from a University of Maryland cohort (N = 160) and from a University of Jyvaskyla study (N = 105), for a total of 1,687 men and women. Using the above definitions, 126 subjects (8.4%) had an adverse change in FI. Numbers of adverse responders reached 12.2% for SBP, 10.4% for TG, and 13.3% for HDL-C. About 7% of participants experienced adverse responses in two or more risk factors. Conclusions/Significance Adverse responses to regular exercise in cardiovascular and diabetes risk factors occur. Identifying the predictors of such unwarranted responses and how to prevent them will provide the foundation for personalized exercise prescription. PMID:22666405
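The adverse-response criterion above (a change of at least two technical errors, TEs, in the adverse direction) can be sketched as follows. This is an illustrative reconstruction, not the study's exact computation: the function names are invented here, and TE is estimated as the pooled root-mean-square of within-subject variances over the repeated baseline measurements.

```python
import statistics as st

def technical_error(repeated_measures):
    """Technical error (TE): within-subject standard deviation,
    pooled across subjects as a root mean square of the per-subject
    sample variances over repeated baseline measurements."""
    variances = [st.variance(measures) for measures in repeated_measures]
    return (sum(variances) / len(variances)) ** 0.5

def adverse_response(baseline, followup, te, higher_is_worse=True):
    """Flag a change at least 2 TEs away from zero in the adverse
    direction (e.g. SBP or TG increasing, HDL-C decreasing)."""
    delta = followup - baseline
    return delta >= 2 * te if higher_is_worse else delta <= -2 * te
```

With a TE of 5 mm Hg for SBP, this rule reproduces the paper's 10 mm Hg cutoff; the HDL-C cutoff of 0.12 mmol/L corresponds to a TE of 0.06 mmol/L with `higher_is_worse=False`.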
Ferasin, Luca; Marcora, Samuele
2009-10-01
Thirteen healthy Labrador retrievers underwent a 5-stage incremental treadmill exercise test to assess its reliability. Blood lactate (BL), heart rate (HR), and body temperature (BT) were measured at rest, after each stage of exercise, and after a 20-min recovery. Reproducibility was assessed by repeating the test after 7 days. Two-way MANOVAs revealed significant differences between consecutive stages, and between values at rest and after recovery. There was also a significant reduction in physiological strain between the first and second trial (learning effect). Test reliability expressed as typical error (BL = 0.22 mmol/l, HR = 9.81 bpm, BT = 0.22 degrees C), coefficient of variation (BL = 19.3%, HR = 7.9% and BT = 0.6%) and test-retest correlation (BL = 0.89, HR = 0.96, BT = 0.95) was good. Results support test reproducibility although the learning effect needs to be controlled when investigating the exercise-related problems commonly observed in this breed.
NASA Astrophysics Data System (ADS)
Lugaz, N.; Kintner, P.
2013-07-01
The Fixed-Φ (FΦ) and Harmonic Mean (HM) fitting methods are two methods to determine the "average" direction and velocity of coronal mass ejections (CMEs) from time-elongation tracks produced by Heliospheric Imagers (HIs), such as the HIs onboard the STEREO spacecraft. Both methods assume a constant velocity in their descriptions of the time-elongation profiles of CMEs, which are used to fit the observed time-elongation data. Here, we analyze the effect of aerodynamic drag on CMEs propagating through interplanetary space, and how this drag affects the result of the FΦ and HM fitting methods. A simple drag model is used to analytically construct time-elongation profiles which are then fitted with the two methods. It is found that higher angles and velocities give rise to greater error in both methods, reaching errors in the direction of propagation of up to 15° and 30° for the FΦ and HM fitting methods, respectively. This is due to the physical accelerations of the CMEs being interpreted as geometrical accelerations by the fitting methods. Because of the geometrical definition of the HM fitting method, it is more affected by the acceleration than the FΦ fitting method. Overall, we find that both techniques overestimate the initial (and final) velocity and direction for fast CMEs propagating beyond 90° from the Sun-spacecraft line, meaning that arrival times at 1 AU would be predicted early (by up to 12 hours). We also find that the direction and arrival time of a wide and decelerating CME can be better reproduced by the FΦ method due to the cancellation of two errors: neglecting the CME width and neglecting the CME deceleration. Overall, the inaccuracies of the two fitting methods are expected to play an important role in the prediction of CME hit and arrival times as we head towards solar maximum and the STEREO spacecraft move further behind the Sun.
Validation of a personal fluid loss monitor.
Wickwire, J; Bishop, P A; Green, J M; Richardson, M T; Lomax, R G; Casaru, C; Jones, E; Curtner-Smith, M
2008-02-01
Dehydration raises heat injury risk and reduces performance. The purpose was to validate the Hydra-Alert Jr (Acumen). The Hydra-Alert was tested in two exercise/clothing conditions. Some participants wore it with exercise clothing while exercising at a self-selected intensity (n = 8). Others wore the Hydra-Alert while wearing a ballistic vest and performing an industrial protocol (n = 8). For each condition, the Hydra-Alert was tested on two occasions (T1 and T2). The Hydra-Alert was tested against nude weight loss for both conditions. The Hydra-Alert had low test-retest reliability for both conditions (average absolute value of the error between Hydra-Alert outputs of T1 and T2 = 0.08 +/- 0.08 percentage points). With exercise clothing, the Hydra-Alert evidenced low-moderate correlations between percent nude weight loss and Hydra-Alert output at 20 min (r = 0.59-T1, p = 0.13; r = 0.12-T2, p = 0.78), at 40 min (r = 0.93-T1, p = 0.001; r = 0.63-T2, p = 0.10), and at approximately 2% weight loss (r = 0.21-T1 and T2, p = 0.61 and 0.62, respectively). The correlation at 40 min during T1 fell during T2, suggesting the Hydra-Alert was inconsistent. When wearing a ballistic vest, the Hydra-Alert had poor validity (T1: r = - 0.29 [p = 0.48] for weight loss vs. monitor; T2: r = 0.11 [p = 0.80]). At the higher levels of dehydration (approximately 2%), the Hydra-Alert error was so high as to render its readings of little value. In some cases, the Hydra-Alert could lead to a false sense of security if the wearer were dehydrated. Therefore, the Hydra-Alert is of little use for those who want to measure their fluid loss while exercising in the heat.
NASA Technical Reports Server (NTRS)
Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.
2009-01-01
Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.
Eliminating time dispersion from seismic wave modeling
NASA Astrophysics Data System (ADS)
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
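The key property above is that the time-dispersion error depends only on frequency and time step, so it can be undone by remapping the spectrum of a finished simulation. A minimal sketch of one commonly quoted form of that frequency map for a second-order temporal FD operator is given below; the exact form and its direction (add vs. remove) should be checked against the paper, so treat this as an assumed illustration of the idea, not the authors' transform.

```python
import math

def fd_to_true_frequency(omega, dt):
    """Assumed dispersion map for a second-order temporal FD scheme:
    a simulated component at angular frequency omega corresponds to a
    dispersion-free component at (2/dt)*sin(omega*dt/2). Note the map
    depends only on omega and dt, not on the medium or the ray path."""
    return (2.0 / dt) * math.sin(omega * dt / 2.0)

def true_to_fd_frequency(omega_true, dt):
    """Inverse map, (2/dt)*asin(omega_true*dt/2); removing or adding
    time dispersion amounts to resampling the seismogram's spectrum
    along these maps after the simulation."""
    return (2.0 / dt) * math.asin(omega_true * dt / 2.0)
```

For small omega*dt the map is near the identity (error ~ (omega*dt)^2/24), which is why time dispersion only becomes visible for large time steps or long propagation times.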
[Application of wavelet neural networks model to forecast incidence of syphilis].
Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song
2011-07-01
To apply a Wavelet Neural Network (WNN) model to forecast the incidence of syphilis. A Back Propagation Neural Network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. The forecasting accuracy of the two models was compared. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52% respectively for the WNN, and 0.0892, 0.1183 and 14.87% respectively for the BPNN. The three indexes for generalization of the models were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is the better model for short-term forecasting of syphilis incidence.
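The three accuracy indexes used above (MAE, RMSE, MAPE) are standard and easy to state precisely; a minimal sketch follows. The function names are illustrative, and MAPE assumes no observed value is zero.

```python
def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error; penalizes large misses more than MAE."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def mape(obs, pred):
    """Mean absolute percentage error, in percent; scale-free, which
    is why it is useful for comparing models across incidence levels."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)
```

Comparing models on all three indexes, as the study does, guards against a model that wins only under one notion of error.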
Heterogenic Solid Biofuel Sampling Methodology and Uncertainty Associated with Prompt Analysis
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Patiño, David; Collazo, Joaquín
2010-01-01
Accurate determination of the properties of biomass is of particular interest in studies on biomass combustion or cofiring. The aim of this paper is to develop a methodology for prompt analysis of heterogeneous solid fuels with an acceptable degree of accuracy. Special care must be taken with the sampling procedure to achieve an acceptable degree of error and low statistical uncertainty. A sampling and error determination methodology for prompt analysis is presented and validated. Two approaches for the propagation of errors are also given and some comparisons are made in order to determine which may be better in this context. Results show in general low, acceptable levels of uncertainty, demonstrating that the samples obtained in the process are representative of the overall fuel composition. PMID:20559506
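The abstract compares two approaches to the propagation of errors without stating them; the most common baseline is first-order (Gaussian) propagation for independent inputs, which can be sketched generically with numerical partial derivatives. This is an illustrative assumption about the method, not the paper's specific formulation.

```python
def propagate(f, values, sigmas, h=1e-6):
    """First-order error propagation for independent inputs:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with each partial derivative estimated by a central difference."""
    partials = []
    for i in range(len(values)):
        up, dn = list(values), list(values)
        up[i] += h
        dn[i] -= h
        partials.append((f(*up) - f(*dn)) / (2.0 * h))
    return sum((p * s) ** 2 for p, s in zip(partials, sigmas)) ** 0.5
```

For a sum of independent quantities this reduces to adding uncertainties in quadrature; for heterogeneous fuels, the sampling variance would enter as one of the `sigmas` alongside the analytical ones.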
A modified error correction protocol for CCITT signalling system no. 7 on satellite links
NASA Astrophysics Data System (ADS)
Kreuer, Dieter; Quernheim, Ulrich
1991-10-01
Comité Consultatif International Télégraphique et Téléphonique (CCITT) Signalling System No. 7 (SS7) provides a level 2 error correction protocol particularly suited for links with propagation delays higher than 15 ms. Not having been originally designed for satellite links, however, the so-called Preventive Cyclic Retransmission (PCR) method only performs well on satellite channels when traffic is low. A modified level 2 error control protocol, termed the Fix Delay Retransmission (FDR) method, is suggested which performs better at high loads, thus providing a more efficient use of the limited carrier capacity. Both the PCR and the FDR methods are investigated by means of simulation, and results concerning throughput, queueing delay, and system delay are presented. The FDR method exhibits higher capacity and shorter delay than the PCR method.
Luther, Stefan; Singh, Rupinder; Gilmour, Robert F.
2010-01-01
The pattern of action potential propagation during various tachyarrhythmias is strongly suspected to be composed of multiple re-entrant waves, but has never been imaged in detail deep within myocardial tissue. An understanding of the nature and dynamics of these waves is important in the development of appropriate electrical or pharmacological treatments for these pathological conditions. We propose a new imaging modality that uses ultrasound to visualize the patterns of propagation of these waves through the mechanical deformations they induce. The new method would have the distinct advantage of being able to visualize these waves deep within cardiac tissue. In this article, we describe one step that would be necessary in this imaging process—the conversion of these deformations into the action potential induced active stresses that produced them. We demonstrate that, because the active stress induced by an action potential is, to a good approximation, only nonzero along the local fiber direction, the problem in our case is actually overdetermined, allowing us to obtain a complete solution. Use of two- rather than three-dimensional displacement data, noise in these displacements, and/or errors in the measurements of the fiber orientations all produce substantial but acceptable errors in the solution. We conclude that the reconstruction of action potential-induced active stress from the deformation it causes appears possible, and that, therefore, the path is open to the development of the new imaging modality. PMID:20499183
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.
2016-03-01
Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of one piezoelectric actuator for generating a continuous shear wave propagation (SWP) and a tone-burst propagation (TBP) of 400 Hz over the gelatin phantom. The other techniques make use of one additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the actuators, a crawling wave (CrW) pattern is generated and propagates with less speed than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, due to lack of knowledge about the future state of the catchment under study. In the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, in timing, shape and volume, which are common in hydrological modelling. A new lumped model, the ERM model, has been selected for this study, to evaluate its parameters for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
The Use of Orthoptics in Dyslexia.
ERIC Educational Resources Information Center
Haddad, Herskel M.; And Others
1984-01-01
In 73 children (6-13 years old) with reading difficulty, ophthalmological evaluation showed that 18 had overt refractive errors, 18 had dyslexia with no ocular anomalies, and 37 had impaired fusional amplitudes, 24 of whom were dyslexic. In all Ss with poor fusional amplitudes the reading mechanism could be improved with orthoptic exercises. (Author/CL)
Diagnosis of Enzyme Inhibition Using Excel Solver: A Combined Dry and Wet Laboratory Exercise
ERIC Educational Resources Information Center
Dias, Albino A.; Pinto, Paula A.; Fraga, Irene; Bezerra, Rui M. F.
2014-01-01
In enzyme kinetic studies, linear transformations of the Michaelis-Menten equation, such as the Lineweaver-Burk double-reciprocal transformation, present some constraints. The linear transformation distorts the experimental error and the relationship between "x" and "y" axes; consequently, linear regression of transformed data…
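The constraint described above can be made concrete. With noise-free data, the Lineweaver-Burk line 1/v = (Km/Vmax)(1/S) + 1/Vmax recovers the Michaelis-Menten parameters exactly; the distortion only appears once experimental error is transformed along with the data, because taking reciprocals overweights the low-[S] points. The sketch below (function names are illustrative) fits the double-reciprocal line by ordinary least squares.

```python
def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

def lineweaver_burk_fit(S_list, v_list):
    """Estimate (Vmax, Km) from the double-reciprocal transformation
    1/v = (Km/Vmax) * (1/S) + 1/Vmax, via ordinary least squares."""
    xs = [1.0 / s for s in S_list]
    ys = [1.0 / v for v in v_list]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km
```

With noisy measurements, a direct nonlinear least-squares fit of the untransformed rate law (e.g. with a numerical solver, as the Excel Solver exercise proposes) avoids this error distortion entirely.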