Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
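To make the comparison concrete, here is a minimal Python sketch (not from the paper) that builds an ensemble of scrambled Sobol point sets via scipy's qmc module and contrasts the spread of QMC integration errors with classical Monte Carlo on a toy integrand whose true value is 1/4; the integrand, sample sizes, and the use of scrambling as the "ensemble of quasi-random point sets" are illustrative assumptions.

    import numpy as np
    from scipy.stats import qmc  # requires scipy >= 1.7

    rng = np.random.default_rng(0)
    f = lambda x: np.prod(x, axis=1)          # true integral over [0,1]^2 is 1/4
    n, reps = 2**10, 200
    errs_mc, errs_qmc = [], []
    for i in range(reps):
        errs_mc.append(f(rng.random((n, 2))).mean() - 0.25)
        sob = qmc.Sobol(d=2, scramble=True, seed=i)   # one ensemble member
        errs_qmc.append(f(sob.random_base2(m=10)).mean() - 0.25)
    print("MC  error spread :", np.std(errs_mc))
    print("QMC error spread :", np.std(errs_qmc))     # typically far smaller

Histogramming errs_qmc over many replicates gives an empirical error distribution of the kind the abstract analyzes, often close to Gaussian but with a much smaller spread than the classical Monte Carlo case.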
Orbital-free bond breaking via machine learning
NASA Astrophysics Data System (ADS)
Snyder, John C.; Rupp, Matthias; Hansen, Katja; Blooston, Leo; Müller, Klaus-Robert; Burke, Kieron
2013-12-01
Using a one-dimensional model, we explore the ability of machine learning to approximate the non-interacting kinetic energy density functional of diatomics. This nonlinear interpolation between Kohn-Sham reference calculations can (i) accurately dissociate a diatomic, (ii) be systematically improved with increased reference data and (iii) generate accurate self-consistent densities via a projection method that avoids directions with no data. With relatively few densities, the error due to the interpolation is smaller than typical errors in standard exchange-correlation functionals.
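As a rough illustration of this kind of nonlinear interpolation from a few reference calculations, the Python sketch below fits kernel ridge regression to a hypothetical one-dimensional energy curve; the surrogate curve, kernel choice, and hyperparameters are assumptions for demonstration, not the authors' model or data.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Stand-in problem: learn a smooth nonlinear curve (a surrogate for an
    # energy functional along a dissociation coordinate) from few points.
    def surrogate_ke(r):                        # hypothetical target curve
        return 1.0 / r**2 + 0.1 * np.sin(r)

    R_train = np.linspace(0.5, 6.0, 15)[:, None]    # "reference calculations"
    model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-6)
    model.fit(R_train, surrogate_ke(R_train.ravel()))
    R_test = np.linspace(0.6, 5.9, 200)[:, None]
    err = np.abs(model.predict(R_test) - surrogate_ke(R_test.ravel()))
    print("max interpolation error:", err.max())    # shrinks as data grow

Adding more training points systematically reduces the interpolation error, mirroring point (ii) of the abstract.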
Impact of the HERA I+II combined data on the CT14 QCD global analysis
NASA Astrophysics Data System (ADS)
Dulat, S.; Hou, T.-J.; Gao, J.; Guzzi, M.; Huston, J.; Nadolsky, P.; Pumplin, J.; Schmidt, C.; Stump, D.; Yuan, C.-P.
2016-11-01
A brief description of the impact of the recent HERA run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of PDFs is given. The new CT14HERA2 PDFs at NLO and NNLO are illustrated. They employ the same parametrization used in the CT14 analysis, but with an additional shape parameter for describing the strange quark PDF. The HERA I+II data are reasonably well described by both the CT14 and CT14HERA2 PDFs, and the differences are smaller than the PDF uncertainties of the standard CT14 analysis. Both sets are acceptable when the error estimates are calculated in the CTEQ-TEA (CT) methodology, and the standard CT14 PDFs are recommended for continued use in the analysis of LHC measurements.
Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F
2018-01-01
Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice, using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics ( n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction, background errors exist at different scales and interact with one another. However, the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information about errors at scales larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics, each influenced by error information at different scales, reveals that the background error variances are enhanced, particularly at large scales and higher levels, when larger-scale error information is introduced through the lateral boundary conditions provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at higher levels, while they improve slightly at lower levels in the nested domain, especially at medium and small scales, when smaller-scale error information is introduced by nesting a higher-resolution model. In addition, introducing larger- (smaller-) scale error information leads to larger (smaller) horizontal and vertical correlation scales of the background errors. Regarding multivariate correlations, the Ekman coupling increases (decreases) when larger- (smaller-) scale error information is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above were each used in a data assimilation and model forecast system, and analysis-forecast cycles were conducted for a period of one month. Comparison of the analyses and forecasts from this system shows that the trends in the analysis increments, as error information at different scales is introduced, are consistent with the trends in the variances and correlations of the background errors. In particular, introducing smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets, and the analysis increments for temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which comprises jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, the analysis increments for humidity are enhanced significantly at large scales at lower levels, moistening the southern part of the analyses; this humidification helps correct the dry bias there and ultimately improves the forecast skill for humidity. Moreover, including larger- (smaller-) scale errors benefits the forecast quality of heavy (light) precipitation at large (small) scales, owing to the amplification (diminution) of the intensity and area of the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
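A minimal sketch of the NMC estimation step described above, in Python with synthetic stand-ins for the paired forecasts (real applications would use archived 48-h and 24-h forecasts valid at the same time; the state dimension, case count, and noise model here are assumed):

    import numpy as np

    # NMC method: approximate the background error covariance B from
    # differences of forecast pairs valid at the same time, over many cases.
    n_state, n_cases = 50, 400
    rng = np.random.default_rng(1)
    f48 = rng.standard_normal((n_cases, n_state))        # stand-in forecasts
    f24 = f48 + 0.3 * rng.standard_normal((n_cases, n_state))
    d = f48 - f24
    d -= d.mean(axis=0)
    B = d.T @ d / (n_cases - 1)          # sample covariance as a proxy for B
    print(B.shape, np.trace(B) / n_state)  # mean background error variance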
The Effects of Non-Normality on Type III Error for Comparing Independent Means
ERIC Educational Resources Information Center
Mendes, Mehmet
2007-01-01
The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. These tests were therefore compared in terms of Type III error rates across a variety of population distributions,…
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates.
Özdemir, Vural; Springer, Simon
2018-03-01
Diversity is increasingly at stake in the early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations, and yet they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important because it influences the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better known type I errors (false positives) and type II errors (false negatives). Kimball defined the "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power; they can adversely impact an entire innovation ecosystem and waste money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies, and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with the prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and help prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation, and society.
Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke
2011-08-01
TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. (2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.
Kataoka, Takeshi; Tsutahara, Michihisa
2010-11-01
The accuracy of the lattice Boltzmann method (LBM) for describing the behavior of a gas in the continuum limit is systematically investigated. The asymptotic analysis for small Knudsen numbers is carried out to derive the corresponding fluid-dynamics-type equations, and the errors of the LBM are estimated by comparing them with the correct fluid-dynamics-type equations. We discuss the following three important cases: (I) the Mach number of the flow is much smaller than the Knudsen number, (II) the Mach number is of the same order as the Knudsen number, and (III) the Mach number is finite. From the von Karman relation, the above three cases correspond to the flows of (I) small Reynolds number, (II) finite Reynolds number, and (III) large Reynolds number, respectively. The analysis is made with the information only of the fundamental properties of the lattice Boltzmann models without stepping into their detailed form. The results are therefore applicable to various lattice Boltzmann models that satisfy the fundamental properties used in the analysis.
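For reference, the von Karman relation invoked above can be written, up to an O(1) factor (a sketch of the standard form, not quoted from the paper), as:

    \mathrm{Kn} \;\sim\; \frac{\mathrm{Ma}}{\mathrm{Re}} \qquad\Longrightarrow\qquad \mathrm{Re} \;\sim\; \frac{\mathrm{Ma}}{\mathrm{Kn}}

so Ma ≪ Kn gives Re ≪ 1 (case I), Ma ∼ Kn gives Re ∼ 1 (case II), and finite Ma with small Kn gives Re ≫ 1 (case III).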
Gu, Shoou-Lian Hwang; Gau, Susan Shur-Fen; Tzang, Shyh-Weir; Hsu, Wen-Yau
2013-11-01
We investigated the three parameters (mu, sigma, tau) of the ex-Gaussian distribution of RT derived from the Conners' continuous performance test (CCPT) and examined the moderating effects of the energetic factors (the inter-stimulus intervals (ISIs) and Blocks) on these three parameters, especially tau, an index describing the positive skew of the RT distribution. We assessed 195 adolescents with DSM-IV ADHD and 90 typically developing (TD) adolescents, aged 10-16. Participants and their parents received psychiatric interviews to confirm the diagnosis of ADHD and other psychiatric disorders. Participants also received intelligence (WISC-III) and CCPT assessments. We found that participants with ADHD had a smaller mu and a larger tau. As the ISI/Block increased, the magnitude of the group difference in tau increased. Among the three ex-Gaussian parameters, tau was positively associated with omission errors, and mu was negatively associated with commission errors. The moderating effects of ISIs and Blocks on tau suggest that the ex-Gaussian parameters can offer more information about the attention state in a vigilance task, especially in ADHD.
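For readers wanting to reproduce this kind of decomposition, scipy parameterizes the ex-Gaussian as exponnorm with shape K = tau/sigma; the Python sketch below simulates and refits RTs with hypothetical parameter values (the numbers are illustrative, not the study's estimates):

    import numpy as np
    from scipy import stats

    # ex-Gaussian RT model: scipy's exponnorm uses shape K = tau / sigma
    mu, sigma, tau = 350.0, 40.0, 150.0     # hypothetical RT parameters (ms)
    rt = stats.exponnorm.rvs(K=tau/sigma, loc=mu, scale=sigma,
                             size=5000, random_state=0)
    K, loc, scale = stats.exponnorm.fit(rt)  # recover parameters by ML fit
    print("mu ~", loc, " sigma ~", scale, " tau ~", K * scale)

The mean of the fitted distribution is loc + K*scale = mu + tau, which is why a larger tau captures the long right tail of RTs without inflating mu.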
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment.
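Error rates of this kind can be approximated by simulation; here is a minimal Python sketch in the same spirit (the normal population, the unit effect size standing in for a "weak effect", and the two-sided t-test are my assumptions, not the paper's exact protocol):

    import numpy as np
    from scipy import stats

    # Monte Carlo estimate of Type I and Type II error of the t-test
    # for a given sample size and a hypothetical weak effect size.
    rng = np.random.default_rng(0)
    n, reps, effect, alpha = 9, 20000, 1.0, 0.05
    type1 = type2 = 0
    for _ in range(reps):
        a, b = rng.standard_normal(n), rng.standard_normal(n)
        if stats.ttest_ind(a, b).pvalue < alpha:      # null is true here
            type1 += 1
        c = rng.standard_normal(n) + effect           # alternative is true
        if stats.ttest_ind(a, c).pvalue >= alpha:
            type2 += 1
    print("Type I:", type1/reps, " Type II:", type2/reps)

Rerunning with n between 3 and 9 reproduces the qualitative pattern reported above: both error rates fall steeply as the sample size approaches 9.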
Frequency-domain gravitational waveform models for inspiraling binary neutron stars
NASA Astrophysics Data System (ADS)
Kawaguchi, Kyohei; Kiuchi, Kenta; Kyutoku, Koutarou; Sekiguchi, Yuichiro; Shibata, Masaru; Taniguchi, Keisuke
2018-02-01
We develop a model for frequency-domain gravitational waveforms from inspiraling binary neutron stars. Our waveform model is calibrated by comparison with hybrid waveforms constructed from our latest high-precision numerical-relativity waveforms and the SEOBNRv2T waveforms in the frequency range of 10-1000 Hz. We show that the phase difference between our waveform model and the hybrid waveforms is always smaller than 0.1 rad for the binary tidal deformability Λ̃ in the range 300 ≲ Λ̃ ≲ 1900 and for a mass ratio between 0.73 and 1. We show that, for 10-1000 Hz, the distinguishability for a signal-to-noise ratio ≲ 50 and the mismatch between our waveform model and the hybrid waveforms are always smaller than 0.25 and 1.1 × 10⁻⁵, respectively. The systematic error of our waveform model in the measurement of Λ̃ is always smaller than 20 with respect to the hybrid waveforms for 300 ≲ Λ̃ ≲ 1900. The statistical error in the measurement of binary parameters is computed employing our waveform model, and we obtain results consistent with previous studies. We show that the systematic error of our waveform model is always smaller than 20% (typically smaller than 10%) of the statistical error for events with a signal-to-noise ratio of 50.
The advanced receiver 2: Telemetry test results in CTA 21
NASA Technical Reports Server (NTRS)
Hinedi, S.; Bevan, R.; Marina, M.
1991-01-01
Telemetry tests with the Advanced Receiver II (ARX II) in Compatibility Test Area 21 are described. The ARX II was operated in parallel with a Block III receiver/baseband processor assembly combination (BLK-III/BPA) and a Block III receiver/subcarrier demodulation assembly/symbol synchronization assembly combination (BLK-III/SDA/SSA). The telemetry simulator assembly provided the test signal for all three configurations, and the symbol signal-to-noise ratio as well as the symbol error rates were measured and compared. Furthermore, bit error rates were also measured by the system performance test computer for all three systems. Results indicate that the ARX II telemetry performance is comparable and sometimes superior to that of the BLK-III/BPA and BLK-III/SDA/SSA combinations.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines the basic physical quantities of wave breaking, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. This transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength, and water depth. The parameterization is further tested in the WAVEWATCH III™ code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Dixon, Curt B; Deitrick, Ronald W; Pierce, Joseph R; Cutrufello, Paul T; Drapeau, Linda L
2005-02-01
The purpose of this study was to compare percent body fat (%BF) estimated by air displacement plethysmography (ADP) and leg-to-leg bioelectrical impedance analysis (LBIA) with hydrostatic weighing (HW) in a group (n = 25) of NCAA Division III collegiate wrestlers. Body composition was assessed during the preseason wrestling weight certification program (WCP) using the NCAA-approved methods (HW, 3-site skinfold [SF], and ADP) and LBIA, which is currently an unaccepted method of assessment. A urine specific gravity less than 1.020, measured by refractometry, was required before all testing. Each subject had all of the assessments performed on the same day. LBIA measurements (Athletic mode) were determined using a Tanita body fat analyzer (model TBF-300A). Hydrostatic weighing, corrected for residual lung volume, was used as the criterion measurement. The %BF data (mean +/- SD) were LBIA (12.3 +/- 4.6), ADP (13.8 +/- 6.3), SF (14.2 +/- 5.3), and HW (14.5 +/- 6.0). %BF estimated by LBIA was significantly (p < 0.01) smaller than by HW and SF. There were no significant differences in body density or %BF estimated by ADP, SF, and HW. All methods showed significant correlations (r = 0.80-0.96; p < 0.01) with HW. The standard errors of estimate (SEE) for %BF were 1.68, 1.87, and 3.60%; pure errors (PE) were 1.88, 1.94, and 4.16% (ADP, SF, and LBIA, respectively). Bland-Altman plots for %BF demonstrated no systematic bias for ADP, SF, and LBIA when compared with HW. These preliminary findings support the use of ADP and SF for estimating %BF during the NCAA WCP in Division III wrestlers. LBIA, which consistently underestimated %BF, is not supported by these data as a valid assessment method for this athletic group.
Object-oriented wavefront correction in an asymmetric amplifying high-power laser system
NASA Astrophysics Data System (ADS)
Yang, Ying; Yuan, Qiang; Wang, Deen; Zhang, Xin; Dai, Wanjun; Hu, Dongxia; Xue, Qiao; Zhang, Xiaolu; Zhao, Junpu; Zeng, Fa; Wang, Shenzhen; Zhou, Wei; Zhu, Qihua; Zheng, Wanguo
2018-05-01
An object-oriented wavefront control method is proposed aiming for excellent near-field homogenization and far-field distribution in an asymmetric amplifying high-power laser system. By averaging the residual errors of the propagating beam, smaller pinholes could be employed on the spatial filters to improve the beam quality. With this wavefront correction system, the laser performance of the main amplifier system in the Shen Guang-III laser facility has been improved. The residual wavefront aberration at the position of each pinhole is below 2 µm (peak-to-valley). For each pinhole, 95% of the total laser energy is enclosed within a circle whose diameter is no more than six times the diffraction limit. At the output of the main laser system, the near-field modulation and contrast are 1.29% and 7.5%, respectively, and 95% of the 1ω (1053 nm) beam energy is contained within a 39.8 µrad circle (6.81 times the diffraction limit) under a laser fluence of 5.8 J cm-2. The measured 1ω focal spot size and near-field contrast are better than the design values of the Shen Guang-III laser facility.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
NASA Technical Reports Server (NTRS)
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
Saturation of the anisoplanatic error in horizontal imaging scenarios
NASA Astrophysics Data System (ADS)
Beck, Jeffrey; Bos, Jeremy P.
2017-09-01
We evaluate the piston-removed anisoplanatic error for smaller apertures imaging over long horizontal paths. Previous works have shown that the piston and tilt compensated anisoplanatic error saturates to values less than one squared radian. Under these conditions the definition of the isoplanatic angle is unclear. These works focused on nadir pointing telescope systems with aperture sizes between five meters and one half meter. We directly extend this work to horizontal imaging scenarios with aperture sizes smaller than one half meter. We assume turbulence is constant along the imaging path and that the ratio of the aperture size to the atmospheric coherence length is on the order of unity.
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; ...
2014-12-30
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ωm, w, σ8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys² ≈ 10⁻⁷, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg², non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ωm, w, σ8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
A modified varying-stage adaptive phase II/III clinical trial design.
Dong, Gaohong; Vandemeulebroecke, Marc
2016-07-01
Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. Therefore, the number of further investigational stages is determined based upon data accumulated to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched in as the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. We therefore modify this design to consider one study endpoint only, so that it may be more readily applicable in real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio in the two-stage versus the three-stage setting have a great impact on the design characteristics. However, this modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher as the threshold probability for the two-stage setting gets smaller.
Aoyama, Yumiko; Kaibara, Atsunori; Takada, Akitsugu; Nishimura, Tetsuya; Katashima, Masataka; Sawamoto, Taiji
2013-04-01
Purpose: Population pharmacokinetics (PK) of sepantronium bromide (YM155) was characterized in patients with non-small cell lung cancer, hormone-refractory prostate cancer, or unresectable stage III or IV melanoma who were enrolled in one of three phase 2 studies conducted in Europe or the U.S. Methods: Sepantronium was administered as a continuous intravenous infusion (CIVI) at 4.8 mg/m(2)/day over 7 days every 21 days. Population PK analysis was performed using a linear one-compartment model involving total body clearance (CL) and volume of distribution, with an inter-individual random effect on CL and proportional residual error models, to describe 578 plasma sepantronium concentrations obtained from a total of 96 patients, using NONMEM Version VI. The first-order conditional estimation method with interaction was applied. Results: The one-compartment model with one random effect on CL and two different proportional error models provided an adequate description of the data. Creatinine clearance (CLCR), cancer type, and alanine aminotransferase (ALT) were recognized as significant covariates of CL. CLCR was the most influential covariate on sepantronium exposure and was predicted to contribute a 25% decrease in CL for patients with moderately impaired renal function (CLCR = 40 mL/min) compared with patients with normal CLCR. Cancer type and ALT had a smaller but nonetheless significant contribution. Other patient characteristics such as age, gender, and race were not considered significant covariates of CL. Conclusions: The results provide important information for optimizing the therapeutic efficacy and minimizing the toxicity of sepantronium in cancer therapy.
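For intuition, the concentration-time profile implied by a linear one-compartment model under a constant-rate infusion follows C(t) = (R0/CL)(1 - exp(-(CL/V)·t)); the Python sketch below evaluates it with purely illustrative parameter values (the clearance, volume, and body surface area are assumptions, not the study's estimates):

    import numpy as np

    # One-compartment model with constant-rate IV infusion (CIVI).
    CL, V = 30.0, 50.0          # clearance (L/day), volume (L) -- illustrative
    R0 = 4.8 * 1.8              # dose rate: 4.8 mg/m2/day * assumed 1.8 m2 BSA
    t = np.linspace(0, 7, 8)    # days during the 7-day infusion
    C = (R0 / CL) * (1 - np.exp(-(CL / V) * t))
    print(np.round(C, 3))       # approaches steady state Css = R0 / CL

Because steady state is R0/CL, a 25% decrease in CL (as predicted for moderate renal impairment) translates directly into about a 33% higher steady-state exposure.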
Cole, Sindy; McNally, Gavan P
2007-10-01
Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error.
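The TD prediction-error account sketched above can be illustrated with a toy value-learning loop in Python, where delta = r + gamma*V(s') - V(s); the serial compound, initial values, and learning rate are hypothetical, and the snippet is a schematic of TD(0), not the authors' model:

    import numpy as np

    # Toy serial compound CSB -> CSA -> shock (stand-in for the Stage III design)
    V = {"B": 0.0, "A": 0.8, "end": 0.0}   # A already predicts shock from Stage I
    gamma, lr = 1.0, 0.1
    for _ in range(100):
        delta_B = 0.0 + gamma * V["A"] - V["B"]   # B -> A transition, no US yet
        V["B"] += lr * delta_B                    # B acquires value via TD error
        delta_A = 1.0 + gamma * V["end"] - V["A"] # A -> shock (US = 1)
        V["A"] += lr * delta_A
    print(V)

Because B is followed by the already-valuable A, the TD error at the B->A transition is positive and B acquires fear despite A's Stage II blocking, which is the signature behavior the experiments probe.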
ERIC Educational Resources Information Center
Greve, Kevin W.; Springer, Steven; Bianchini, Kevin J.; Black, F. William; Heinly, Matthew T.; Love, Jeffrey M.; Swift, Douglas A.; Ciota, Megan A.
2007-01-01
This study examined the sensitivity and false-positive error rate of reliable digit span (RDS) and the WAIS-III Digit Span (DS) scaled score in persons alleging toxic exposure and determined whether error rates differed from published rates in traumatic brain injury (TBI) and chronic pain (CP). Data were obtained from the files of 123 persons…
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
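As a schematic of the underlying idea (shrinking individual variance estimates toward a pooled value), the Python sketch below applies a simple linear shrinkage; this is a toy stand-in, not the paper's smooth empirical Bayes estimator, and the fixed 0.7/0.3 weights are arbitrary assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    true_var = rng.gamma(2.0, 1.0, size=10)       # hypothetical scale components
    s2 = np.array([rng.normal(0, np.sqrt(v), 20).var(ddof=1) for v in true_var])
    shrunk = 0.7 * s2 + 0.3 * s2.mean()           # simple linear shrinkage
    print("raw MSE   :", np.mean((s2 - true_var)**2))
    print("shrunk MSE:", np.mean((shrunk - true_var)**2))  # often smaller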
16 CFR 1220.1 - Scope, compliance dates, and definitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
...; (iii) Has an interior length dimension either greater than 139.7 cm (55 in.) or smaller than 126.3 cm (49 3/4 in.), or, an interior width dimension either greater than 77.7 cm (30 5/8 in.) or smaller than... crib—a non-full-size baby crib with an interior length dimension smaller than 126.3 cm (49 3/4 in.), or...
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
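The optimal estimation technique mentioned above reduces, in the linear Gaussian case, to the standard maximum a posteriori update x_hat = x_a + (K' Se^-1 K + Sa^-1)^-1 K' Se^-1 (y - K x_a); the Python sketch below applies it to a toy problem (the Jacobian, covariances, and problem sizes are invented for illustration, with the noise covariance loosely reflecting the assumed signal-to-noise of 200):

    import numpy as np

    rng = np.random.default_rng(0)
    m_meas, n_state = 12, 5
    K = rng.standard_normal((m_meas, n_state))    # Jacobian (assumed known)
    Sa = np.eye(n_state) * 0.5                    # a priori covariance
    Se = np.eye(m_meas) * (1 / 200)**2            # noise covariance, SNR ~ 200
    xa = np.zeros(n_state)                        # a priori state
    x_true = 0.5 * rng.standard_normal(n_state)
    y = K @ x_true + rng.multivariate_normal(np.zeros(m_meas), Se)
    A = np.linalg.inv(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa))
    x_hat = xa + A @ K.T @ np.linalg.inv(Se) @ (y - K @ xa)
    print(np.round(x_hat - x_true, 3))            # retrieval error

A systematic offset in y (the analogue of a tangent height misregistration) propagates through the same gain matrix, which is why pointing error dominates the retrieval error budget described above.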
Survey of Radar Refraction Error Corrections
2016-11-01
RCC Document 266-16, Survey of Radar Refraction Error Corrections, prepared by the Electronic Trajectory Measurements Group, November 2016. Distribution A: approved for public release.
PID Controller Design for FES Applied to Ankle Muscles in Neuroprosthesis for Standing Balance
Rouhani, Hossein; Same, Michael; Masani, Kei; Li, Ya Qi; Popovic, Milos R.
2017-01-01
Closed-loop controlled functional electrical stimulation (FES) applied to the lower limb muscles can be used as a neuroprosthesis for standing balance in neurologically impaired individuals. The objective of this study was to propose a methodology for designing a proportional-integral-derivative (PID) controller for FES applied to the ankle muscles toward maintaining standing balance for several minutes and in the presence of perturbations. First, a model of the physiological control strategy for standing balance was developed. Second, the parameters of a PID controller that mimicked the physiological balance control strategy were determined to stabilize the human body when modeled as an inverted pendulum. Third, this PID controller was implemented using a custom-made Inverted Pendulum Standing Apparatus that eliminated the effect of visual and vestibular sensory information on voluntary balance control. Using this setup, the individual-specific FES controllers were tested in able-bodied individuals and compared with disrupted voluntary control conditions in four experimental paradigms: (i) quiet-standing; (ii) sudden change of targeted pendulum angle (step response); (iii) balance perturbations that simulate arm movements; and (iv) sudden change of targeted angle of a pendulum with individual-specific body-weight (step response). In paradigms (i) to (iii), a standard 39.5-kg pendulum was used, and 12 subjects were involved. In paradigm (iv) 9 subjects were involved. Across the different experimental paradigms and subjects, the FES-controlled and disrupted voluntarily-controlled pendulum angle showed root mean square errors of <1.2 and 2.3 deg, respectively. The root mean square error (all paradigms), rise time, settle time, and overshoot [paradigms (ii) and (iv)] in FES-controlled balance were significantly smaller or tended to be smaller than those observed with voluntarily-controlled balance, implying improved steady-state and transient responses of FES-controlled balance. At the same time, the FES-controlled balance required similar torque levels (no significant difference) as voluntarily-controlled balance. The implemented PID parameters were to some extent consistent among subjects for standard weight conditions and did not require prolonged individual-specific tuning. The proposed methodology can be used to design FES controllers for closed-loop controlled neuroprostheses for standing balance. Further investigation of the clinical implementation of this approach for neurologically impaired individuals is needed. PMID:28676739
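A schematic of the control law in the study, a discrete-time PID loop regulating an inverted-pendulum angle, is sketched below in Python; the gains, time step, pendulum geometry, and simplified dynamics are all assumptions for illustration, not the individual-specific controllers of the experiment (only the 39.5-kg mass is taken from the abstract):

    import numpy as np

    Kp, Ki, Kd, dt = 900.0, 50.0, 250.0, 0.01   # illustrative gains, 100 Hz loop
    m, L, g = 39.5, 1.0, 9.81                   # 39.5 kg pendulum, assumed 1 m arm
    theta, omega = 0.05, 0.0                    # 0.05 rad initial tilt
    integ, prev_err, target = 0.0, 0.0, 0.0
    for _ in range(500):
        err = target - theta
        integ += err * dt
        deriv = (err - prev_err) / dt
        torque = Kp * err + Ki * integ + Kd * deriv   # stand-in for FES torque
        prev_err = err
        # simplified inverted-pendulum dynamics: I*omega' = m*g*L*sin(theta) + torque
        alpha = (m * g * L * np.sin(theta) + torque) / (m * L**2)
        omega += alpha * dt
        theta += omega * dt
    print("final angle (rad):", theta)

Stability requires the proportional stiffness Kp to exceed the gravitational toppling term m*g*L (about 388 N·m/rad here), which is the essential constraint any such balance controller must satisfy.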
NASA Technical Reports Server (NTRS)
Lung, Shun-Fat; Ko, William L.
2016-01-01
In support of the Adaptive Compliant Trailing Edge (ACTE) project at the NASA Armstrong Flight Research Center, displacement transfer functions were applied to the swept wing of a Gulfstream G-III airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) to obtain deformed shape predictions. Four strain-sensing lines (two on the lower surface, two on the upper surface) were used to calculate the deformed shape of the G-III wing under bending and torsion. There being an insufficient number of surface strain sensors, the existing G-III wing box finite element model was used to generate simulated surface strains for input to the displacement transfer functions. The resulting predicted deflections have good correlation with the finite-element-generated deflections as well as the measured deflections from the ground load calibration test. The convergence study showed that the displacement prediction error at the G-III wing tip can be reduced by increasing the number of strain stations (for each strain-sensing line) down to a minimum error of 1.6 percent at 17 strain stations; using more than 17 strain stations yielded no benefit because the error increased slightly to 1.9 percent when 32 strain stations were used.
Zhang, Wenjian; Abramovitch, Kenneth; Thames, Walter; Leon, Inga-Lill K; Colosi, Dan C; Goren, Arthur D
2009-07-01
The objective of this study was to compare the operating efficiency and technical accuracy of 3 different rectangular collimators. A full-mouth intraoral radiographic series, excluding central incisor views, was taken on training manikins by 2 groups of undergraduate dental and dental hygiene students. Three types of rectangular collimator were used: Type I ("free-hand"), Type II (mechanical interlocking), and Type III (magnetic collimator). Eighteen students exposed one side of the manikin with a Type I collimator and the other side with a Type II. Another 15 students exposed the manikin with Type I and Type III, respectively. Type I is currently used for teaching and patient care at our institution and was considered the control to which both Types II and III were compared. The time necessary to perform the procedure, subjective user friendliness, and the number of technique errors (placement, projection, and cone cut errors) were assessed. The Student t test or signed rank test was used to determine statistical difference (P
Rank score and permutation testing alternatives for regression quantile estimates
Cade, B.S.; Richards, J.D.; Mielke, P.W.
2006-01-01
Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ² random variable with q degrees of freedom (where q parameters are constrained by H₀) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
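To show the general mechanism behind a permutation-based test (though not the paper's weighted or double-permutation F procedures), here is a minimal Python sketch that builds a null distribution for a two-group difference by relabeling:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(30)
    y = rng.standard_normal(30) + 0.8          # hypothetical group shift
    stat = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(9999):
        rng.shuffle(pooled)                    # re-randomize group labels
        if abs(pooled[:30].mean() - pooled[30:].mean()) >= stat:
            count += 1
    print("permutation p-value:", (count + 1) / 10000)

The appeal for quantile regression is the same as here: the reference distribution is generated from the data rather than assumed, which is what keeps Type I error closer to nominal at small n and extreme quantiles.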
Pailing, Patricia E; Segalowitz, Sidney J
2004-01-01
This study examines changes in the error-related negativity (ERN/Ne) related to motivational incentives and personality traits. ERPs were gathered while adults completed a four-choice letter task during four motivational conditions. Monetary incentives for finger and hand accuracy were altered across motivation conditions to either be equal or favor one type of accuracy over the other in a 3:1 ratio. Larger ERN/Ne amplitudes were predicted with increased incentives, with personality moderating this effect. Results were as expected: Individuals higher on conscientiousness displayed smaller motivation-related changes in the ERN/Ne. Similarly, those low on neuroticism had smaller effects, with the effect of Conscientiousness absent after accounting for Neuroticism. These results emphasize an emotional/evaluative function for the ERN/Ne, and suggest that the ability to selectively invest in error monitoring is moderated by underlying personality.
NASA Technical Reports Server (NTRS)
Smith, Brandon D.; Boyd, Iain D.; Kamhawi, Hani
2014-01-01
The sensitivity of xenon ionization rates to collision cross-sections is studied within the framework of a hybrid-PIC model of a Hall thruster discharge. A revised curve fit based on the Drawin form is proposed and is shown to better reproduce the measured cross-sections at high electron energies, with differences in the integrated rate coefficients being on the order of 10% for electron temperatures between 20 eV and 30 eV. The revised fit is implemented into HPHall and the updated model is used to simulate NASA's HiVHAc EDU2 Hall thruster at discharge voltages of 300, 400, and 500 V. For all three operating points, the revised cross-sections result in an increase in the predicted thrust and anode efficiency, reducing the error relative to experimental performance measurements. Electron temperature and ionization reaction rates are shown to follow the trends expected based on the integrated rate coefficients. The effects of triply-charged xenon are also assessed. The predicted thruster performance is found to have little or no dependence on the presence of triply-charged ions. The fraction of ion current carried by triply-charged ions is found to be on the order of 1% and increases slightly with increasing discharge voltage. The reaction rates for the 0→III, I→III, and II→III ionization reactions are found to be of similar order of magnitude and are about one order of magnitude smaller than the rate of 0→II ionization in the discharge channel.
Scaling fixed-field alternating gradient accelerators with a small orbit excursion.
Machida, Shinji
2009-10-16
A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
NASA Astrophysics Data System (ADS)
Bell, Stephen C.; Ginsburg, Marc A.; Rao, Prabhakara P.
An important part of space launch vehicle mission planning for a planetary mission is the integrated analysis of guidance and performance dispersions for both booster and upper stage vehicles. For the Mars Observer mission, an integrated trajectory analysis was used to maximize the scientific payload and to minimize injection errors by optimizing the energy management of both vehicles. This was accomplished by designing the Titan III booster vehicle to inject into a hyperbolic departure plane, and the Transfer Orbit Stage (TOS) to correct any booster dispersions. An integrated Monte Carlo analysis of the performance and guidance dispersions of both vehicles provided sensitivities, an evaluation of their guidance schemes, and an injection error covariance matrix. The polynomial guidance schemes used for the Titan III variable flight azimuth computations and the TOS solid rocket motor ignition time and burn direction derivations accounted for a wide variation of launch times, performance dispersions, and target conditions. The Mars Observer spacecraft was launched on 25 September 1992 on the Titan III/TOS vehicle. The post-flight analysis indicated that a near-perfect park orbit injection was achieved, followed by a trans-Mars injection with less than 2σ errors.
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between these two methods are presented.
16 CFR § 1220.1 - Scope, compliance dates, and definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... affecting commerce and other purposes; (iii) Has an interior length dimension either greater than 139.7 cm (55 in.) or smaller than 126.3 cm (49 3/4 in.), or, an interior width dimension either greater than 77... components. (D) Undersize crib—a non-full-size baby crib with an interior length dimension smaller than 126.3...
16 CFR 1220.1 - Scope, compliance dates, and definitions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... affecting commerce and other purposes; (iii) Has an interior length dimension either greater than 139.7 cm (55 in.) or smaller than 126.3 cm (49 3/4 in.), or, an interior width dimension either greater than 77... components. (D) Undersize crib—a non-full-size baby crib with an interior length dimension smaller than 126.3...
16 CFR 1220.1 - Scope, compliance dates, and definitions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... affecting commerce and other purposes; (iii) Has an interior length dimension either greater than 139.7 cm (55 in.) or smaller than 126.3 cm (49 3/4 in.), or, an interior width dimension either greater than 77... components. (D) Undersize crib—a non-full-size baby crib with an interior length dimension smaller than 126.3...
Platt, Tyson L; Zachar, Peter; Ray, Glen E; Lobello, Steven G; Underhill, Andrea T
2007-04-01
Studies have found that Wechsler scale administration and scoring proficiency is not easily attained during graduate training. These findings may be related to methodological issues. Using a single-group repeated measures design, this study documents statistically significant, though modest, error reduction on the WAIS-III and WISC-III during a graduate course in assessment. The study design does not permit the isolation of training factors related to error reduction, or assessment of whether error reduction is a function of mere practice. However, the results do indicate that previous study findings of no or inconsistent improvement in scoring proficiency may have been the result of methodological factors. Implications for teaching individual intelligence testing and further research are discussed.
ERIC Educational Resources Information Center
Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.
2009-01-01
The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…
NASA Technical Reports Server (NTRS)
Saitoh, Naoko; Hayashida, S.; Sugita, T.; Nakajima, H.; Yokota, T.; Hayashi, M.; Shiraishi, K.; Kanzawa, H.; Ejiri, M. K.; Irie, H.;
2006-01-01
The Improved Limb Atmospheric Spectrometer (ILAS) II on board the Advanced Earth Observing Satellite (ADEOS) II observed stratospheric aerosol in visible/near-infrared/infrared spectra over high latitudes in the Northern and Southern Hemispheres. Observations were taken intermittently from January to March, and continuously from April through October, 2003. We assessed the data quality of ILAS-II version 1.4 aerosol extinction coefficients at 780 nm from comparisons with the Stratospheric Aerosol and Gas Experiment (SAGE) II, SAGE III, and the Polar Ozone and Aerosol Measurement (POAM) III aerosol data. At heights below 20 km in the Northern Hemisphere, aerosol extinction coefficients from ILAS-II agreed with those from SAGE II and SAGE III within 10%, and with those from POAM III within 15%. From 20 to 26 km, ILAS-II aerosol extinction coefficients were smaller than extinction coefficients from the other sensors; differences between ILAS-II and SAGE II ranged from 10% at 20 km to 34% at 26 km. ILAS-II aerosol extinction coefficients from 20 to 25 km in February over the Southern Hemisphere had a negative bias (12-66%) relative to SAGE II aerosol data. The bias increased with increasing altitude. Comparisons between ILAS-II and POAM III aerosol extinction coefficients from January to May in the Southern Hemisphere (defined as the non-Polar Stratospheric Cloud (PSC) season) yielded qualitatively similar results. From June to October (defined as the PSC season), aerosol extinction coefficients from ILAS-II were smaller than those from POAM III above 17 km, as in the case of the non-PSC season; however, ILAS-II and POAM III aerosol data were within 15% of each other from 12 to 17 km.
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that restricts attention to cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. The two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses, respectively. Postoperative refractive errors were measured in the fifth week. More accurate calculation was indicated by significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The MAEs and RMSEs were smaller for PCI (0.5106 diopter (D) and 0.6037 D) than for IUB (0.7000 D and 0.8062 D). The higher accuracy was principally contributed by negative errors, i.e., myopia: the MAEs and RMSEs for negative errors were 0.7955 D and 0.8562 D for IUB, versus 0.5185 D and 0.5853 D for PCI. Their differences were significant. 72.34% of PCI errors fell within the clinically accepted range of ±0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
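The accuracy measures used above are standard; a minimal sketch of how MAE, RMSE, and the ±0.50 D acceptance rate could be computed from postoperative refractive errors follows. The error values here are hypothetical illustrations, not data from the study.

```python
import numpy as np

def mae_rmse(errors_diopters):
    """Mean absolute error and root mean squared error away from emmetropia (0 D)."""
    e = np.asarray(errors_diopters, dtype=float)
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    return mae, rmse

# Hypothetical postoperative refractive errors (D) for the two biometry methods
iub_errors = np.array([-1.25, -0.75, 0.50, -0.25, 1.00])
pci_errors = np.array([-0.50, 0.25, -0.25, 0.50, -0.75])

for name, errs in [("IUB", iub_errors), ("PCI", pci_errors)]:
    mae, rmse = mae_rmse(errs)
    within = np.mean(np.abs(errs) <= 0.50) * 100  # share inside the ±0.50 D band
    print(f"{name}: MAE={mae:.4f} D, RMSE={rmse:.4f} D, within ±0.50 D: {within:.0f}%")
```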
Treece, C
1982-05-01
The author describes the use of the DSM-III's diagnostic criteria and classification system as a research instrument and discusses some of the advantages and drawbacks of DSM-III for a specific type of study. A rearrangement of the hierarchical order of the DSM-III diagnostic classes is suggested. This rearrangement provides for levels of certainty in analyzing interrater reliability and offers a simplified framework for summarizing group data. When this approach is combined with a structured interview and response format, it provides a flexible way of managing a large classification system for a smaller study without sacrificing standardization.
Yoshimura, Etsuro; Kohdr, Hicham; Mori, Satoshi; Hider, Robert C
2011-08-01
The phytosiderophores, mugineic acid (MA) and epi-hydroxymugineic acid (HMA), together with a related compound, nicotianamine (NA), were investigated for their ability to bind Al(III). Potentiometric titration analysis demonstrated that MA and HMA bind Al(III), in contrast to NA which does not under normal physiological conditions. With MA and HMA, in addition to the Al complex (AlL), the protonated (AlLH) and deprotonated (AlLH(-1)) complexes were identified from an analysis of titration curves, where L denotes the phytosiderophore form in which all the carboxylate functions are ionized. The equilibrium formation constants of the Al(III) phytosiderophore complexes are much smaller than those of the corresponding Fe(III) complexes. The higher selectivity of phytosiderophores for Fe(III) over Al(III) facilitates Fe(III) acquisition in alkaline conditions where free Al(III) levels are higher than free Fe(III) levels.
Rong, Hao; Tian, Jin
2015-05-01
The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility using a case study of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate for analyzing human error within a sociotechnical system. The system-theoretic accident model and processes (STAMP) approach was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors are related to a broad view of sociotechnical systems and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations for both technical and managerial improvements to lower the risk of the accident are proposed. The interdisciplinary approach provides complementary support between system safety and human factors. The integrated method based on STAMP and the SD model contributes to HRA effectively. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
Nearest neighbor: The low-mass Milky Way satellite Tucana III
Simon, J. D.; Li, T. S.; Drlica-Wagner, A.; ...
2017-03-17
Here, we present Magellan/IMACS spectroscopy of the recently discovered Milky Way satellite Tucana III (Tuc III). We identify 26 member stars in Tuc III from which we measure a mean radial velocity of v_hel = -102.3 ± 0.4 (stat.) ± 2.0 (sys.) km s^-1, a velocity dispersion of 0.1^{+0.7}_{-0.1} km s^-1, and a mean metallicity of [Fe/H] = -2.42^{+0.07}_{-0.08}. The upper limit on the velocity dispersion is σ < 1.5 km s^-1 at 95.5% confidence, and the corresponding upper limit on the mass within the half-light radius of Tuc III is 9.0 × 10^4 M_⊙. We cannot rule out mass-to-light ratios as large as 240 M_⊙/L_⊙ for Tuc III, but much lower mass-to-light ratios that would leave the system baryon-dominated are also allowed. We measure an upper limit on the metallicity spread of the stars in Tuc III of 0.19 dex at 95.5% confidence. Tuc III has a smaller metallicity dispersion and likely a smaller velocity dispersion than any known dwarf galaxy, but a larger size and lower surface brightness than any known globular cluster. Its metallicity is also much lower than those of the clusters with similar luminosity. We therefore tentatively suggest that Tuc III is the tidally stripped remnant of a dark matter-dominated dwarf galaxy, but additional precise velocity and metallicity measurements will be necessary for a definitive classification. If Tuc III is indeed a dwarf galaxy, it is one of the closest external galaxies to the Sun. Because of its proximity, the most luminous stars in Tuc III are quite bright, including one star at V = 15.7 that is the brightest known member star of an ultra-faint satellite.
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other developed by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between the two methods are presented.
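Time-step error analyses of this kind are often summarized by an observed convergence order. A generic, hedged sketch of estimating that order empirically, by fitting the slope of log-error against log-time-step, is shown below; the `simulate` function is a hypothetical stand-in, not either collision model.

```python
import numpy as np

def empirical_order(simulate, dts, reference):
    """Estimate the time-step error order p from |Q(dt) - Q_ref| ~ C * dt**p."""
    errors = np.array([abs(simulate(dt) - reference) for dt in dts])
    # slope of log(error) vs log(dt) gives the observed order p
    p, _ = np.polyfit(np.log(dts), np.log(errors), 1)
    return p, errors

# Hypothetical stand-in for a collision model: Q(dt) = Q_exact + C*dt (first order)
simulate = lambda dt: 1.0 + 0.3 * dt
dts = np.array([0.2, 0.1, 0.05, 0.025])
p, errs = empirical_order(simulate, dts, reference=1.0)
print(f"observed order ≈ {p:.2f}")  # ≈ 1 for this stand-in
```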
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M
2010-04-01
In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. This was a methodological study investigating three different VS measurement methods compared against a reference method based on serial slice volume estimates. The approximation methods were based on: (i) one single diameter, (ii) three orthogonal diameters, or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency for approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. These include greater retest errors than area-based measurements (25% and 15%, respectively), and the fact that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.
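The diameter-based approximations above are commonly implemented as simple geometric proxies; a minimal sketch follows, assuming the usual sphere and ellipsoid formulas with the factor π/6. The study's empirical proportionality coefficients would replace that factor; the measurement values below are hypothetical.

```python
import math

def volume_from_single_diameter(d_cm):
    """Sphere proxy from one maximal diameter: V = (pi/6) * d^3."""
    return math.pi / 6.0 * d_cm ** 3

def volume_from_three_diameters(a_cm, b_cm, c_cm):
    """Ellipsoid proxy from three orthogonal diameters: V = (pi/6) * a * b * c."""
    return math.pi / 6.0 * a_cm * b_cm * c_cm

# Hypothetical tumour measurements (cm)
print(volume_from_single_diameter(2.0))             # sphere proxy
print(volume_from_three_diameters(2.0, 1.6, 1.4))   # ellipsoid proxy
```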
Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.
Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán
2016-07-12
Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum computer than on a classical one. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high-performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated eighteen 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies.
NASA Astrophysics Data System (ADS)
Hartanto, R.; Jantra, M. A. C.; Santosa, S. A. B.; Purnomoadi, A.
2018-01-01
The purpose of this research was to find an appropriate model relating the feed energy and protein ratio to the amount and quality of milk protein produced. The research was conducted at Getasan Sub-district, Semarang Regency, Central Java Province, Indonesia, using 40 samples (Holstein Friesian cattle, lactation periods II-III and lactation months 3-4). Data were analyzed using linear and quadratic regressions to predict the production and quality of milk protein from the feed energy and protein ratio describing the diet. The significance of each model was tested using analysis of variance. The coefficient of determination (R2), residual variance (RV) and root mean square prediction error (RMSPE) are reported for the developed equations as indicators of goodness of fit. The results showed no relationship between CP/TDN and milk protein (kg), milk casein (%), milk casein (kg) or milk urea N (mg/dl). A significant relationship was observed for milk production (L or kg) and milk protein (%) as functions of CP/TDN, in both the linear and quadratic models. In addition, a quadratic change in milk production (L) (P = 0.003), milk production (kg) (P = 0.003) and milk protein concentration (%) (P = 0.026) was observed with increasing CP/TDN. It can be concluded that the quadratic equation was the better-fitting model for this research, because it has a larger R2, smaller RV and smaller RMSPE than the linear equation.
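The linear-versus-quadratic comparison above follows a standard pattern; a minimal sketch of fitting both models and computing R2, RV, and RMSPE is given below. The CP/TDN and milk-production values are hypothetical illustrations, not the study's data.

```python
import numpy as np

def fit_and_score(x, y, degree):
    """Fit a polynomial of the given degree and report R^2, residual variance, RMSPE."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    resid = y - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rv = ss_res / (len(y) - degree - 1)     # residual variance
    rmspe = np.sqrt(np.mean(resid ** 2))    # root mean square prediction error
    return r2, rv, rmspe

# Hypothetical CP/TDN ratios and milk production (kg/day) for illustration
x = np.array([0.18, 0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.32])
y = np.array([12.1, 13.0, 13.8, 14.1, 14.3, 14.2, 13.9, 13.4])

for deg, label in [(1, "linear"), (2, "quadratic")]:
    r2, rv, rmspe = fit_and_score(x, y, deg)
    print(f"{label}: R2={r2:.3f}, RV={rv:.3f}, RMSPE={rmspe:.3f}")
```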
Estimation of the optical errors on the luminescence imaging of water for proton beam
NASA Astrophysics Data System (ADS)
Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-04-01
Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference arises from optical phenomena: parallax errors of the optical system and reflection of the luminescence from the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error, we measured luminescence images during proton-beam irradiation using a cooled charge-coupled device camera while changing the height of the camera's optical axis relative to the depth of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak, and that reflection from the phantom affects the shapes of the depth profiles of the luminescence images of water.
34 CFR 682.410 - Fiscal, administrative, and enforcement requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accordance with applicable legal and accounting standards; (iii) The Secretary's equitable share of... any other errors in its accounting or reporting as soon as practicable after the errors become known... guaranty agency's agreements with the Secretary; and (C) Market prices of comparable goods or services. (b...
Interaction of finger enslaving and error compensation in multiple finger force production.
Martin, Joel R; Latash, Mark L; Zatsiorsky, Vladimir M
2009-01-01
Previous studies have documented two patterns of finger interaction during multi-finger pressing tasks, enslaving and error compensation, which do not agree with each other. Enslaving is characterized by a positive correlation between instructed (master) and non-instructed (slave) finger(s), while error compensation can be described as a pattern of negative correlation between master and slave fingers. We hypothesize that the pattern of finger interaction, enslaving or compensation, depends on the initial force level and the magnitude of the targeted force change. Subjects were instructed to press with four fingers (I index, M middle, R ring, and L little) from a specified initial force to target forces following a ramp target line. Force-force relations between the master and each of three slave fingers were analyzed during the ramp phase of trials by calculating correlation coefficients within each master-slave pair; a two-factor ANOVA was then performed to determine the effects of initial force and force increase on the correlation coefficients. It was found that, as initial force increased, the value of the correlation coefficient decreased and in some cases became negative, i.e. the enslaving transformed into error compensation. The magnitude of the force increase had a smaller effect on the correlation coefficients. The observations support the hypothesis that the pattern of inter-finger interaction, enslaving or compensation, depends on the initial force level and, to a smaller degree, on the targeted magnitude of the force increase. They suggest that the controller views tasks with higher steady-state forces and smaller force changes as implying a requirement to avoid large changes in the total force.
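The master-slave correlation analysis described above reduces to correlating force time series within each finger pair; a minimal sketch follows, assuming synthetic ramp-phase traces (all force values and noise levels are hypothetical).

```python
import numpy as np

def master_slave_correlations(forces, master):
    """Correlate the master finger's force ramp with each slave finger's force.

    forces: dict of finger label -> 1-D array of force samples over the ramp.
    Positive r suggests enslaving; negative r suggests error compensation.
    """
    return {finger: np.corrcoef(forces[master], f)[0, 1]
            for finger, f in forces.items() if finger != master}

# Hypothetical ramp-phase force traces (N) for an index-finger (I) master trial
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
forces = {
    "I": 5.0 + 10.0 * t,                                  # instructed ramp
    "M": 1.0 + 0.8 * t + 0.05 * rng.standard_normal(200),  # enslaved (positive slope)
    "R": 1.2 - 0.5 * t + 0.05 * rng.standard_normal(200),  # compensating (negative)
    "L": 0.8 + 0.02 * rng.standard_normal(200),
}
print(master_slave_correlations(forces, master="I"))
```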
Variations in tooth size and arch dimensions in Malay schoolchildren.
Hussein, Khalid W; Rajion, Zainul A; Hassan, Rozita; Noor, Siti Noor Fazliah Mohd
2009-11-01
To compare the mesio-distal tooth sizes and dental arch dimensions in Malay boys and girls with Class I, Class II and Class III malocclusions. The dental casts of 150 subjects (78 boys, 72 girls), between 12 and 16 years of age, with Class I, Class II and Class III malocclusions were used. Each group consisted of 50 subjects. An electronic digital caliper was used to measure the mesio-distal tooth sizes of the upper and lower permanent teeth (first molar to first molar), the intercanine and intermolar widths. The arch lengths and arch perimeters were measured with AutoCAD software (Autodesk Inc., San Rafael, CA, U.S.A.). The mesio-distal dimensions of the upper lateral incisors and canines in the Class I malocclusion group were significantly smaller than the corresponding teeth in the Class III and Class II groups, respectively. The lower canines and first molars were significantly smaller in the Class I group than the corresponding teeth in the Class II group. The lower intercanine width was significantly smaller in the Class II group as compared with the Class I group, and the upper intermolar width was significantly larger in Class III group as compared with the Class II group. There were no significant differences in the arch perimeters or arch lengths. The boys had significantly wider teeth than the girls, except for the left lower second premolar. The boys also had larger upper and lower intermolar widths and lower intercanine width than the girls. Small, but statistically significant, differences in tooth sizes are not necessarily accompanied by significant arch width, arch length or arch perimeter differences. Generally, boys have wider teeth, larger lower intercanine width and upper and lower intermolar widths than girls.
Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias
2014-06-10
We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.
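The coefficient cutoff mentioned in point (iii) can be illustrated with a minimal sketch: zero out molecular-orbital coefficients below a threshold and measure the induced change in a summed orbital energy. The matrices here are random stand-ins, not a DFTB calculation, and the cutoff values are illustrative assumptions.

```python
import numpy as np

def truncate_coefficients(C, cutoff):
    """Zero molecular-orbital coefficients with magnitude below the cutoff (sparsification)."""
    return np.where(np.abs(C) >= cutoff, C, 0.0)

def orbital_energy_sum(C, H):
    """Sum of orbital expectation values sum_i c_i^T H c_i over the occupied columns."""
    return float(np.einsum("mi,mn,ni->", C, H, C))

rng = np.random.default_rng(0)
n, nocc = 200, 20
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)  # symmetric stand-in Hamiltonian
# Coefficient matrix with many near-zero entries, mimicking localized orbitals
C = rng.standard_normal((n, nocc)) * np.exp(-rng.uniform(0, 8, (n, nocc)))

e_full = orbital_energy_sum(C, H)
for cutoff in (1e-6, 1e-4, 1e-2):
    C_cut = truncate_coefficients(C, cutoff)
    print(f"cutoff={cutoff:.0e}: |dE|={abs(orbital_energy_sum(C_cut, H) - e_full):.2e}, "
          f"zeros={np.mean(C_cut == 0.0):.1%}")
```

In the spirit of the abstract, the cutoff would be chosen so that |dE| stays below the SCF convergence criterion.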
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
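The abstract does not give the exact functional form of the uncorrelated error, only that it grows with the ratio of scale to the non-linearity scale and is normalized to 2% at k = 0.4 h/Mpc, z = 0.5. A heavily hedged sketch of one possible parametrization satisfying those properties follows; the power-law exponent and the k_nl(z) model are assumptions for illustration only.

```python
import numpy as np

def theory_error(k, k_nl, alpha=1.0):
    """Relative error envelope growing with (k / k_nl)**alpha (assumed power law)."""
    return (k / k_nl) ** alpha

def normalized_envelope(k, z, k_nl_of_z, alpha=1.0):
    """Scale the envelope so it equals 2% at k = 0.4 h/Mpc and z = 0.5."""
    norm = 0.02 / theory_error(0.4, k_nl_of_z(0.5), alpha)
    return norm * theory_error(k, k_nl_of_z(z), alpha)

# Hypothetical non-linear scale, growing with redshift so the error shrinks at high z
k_nl = lambda z: 0.2 * (1.0 + z)

for k in (0.1, 0.2, 0.4, 0.6):
    print(f"k={k:.1f} h/Mpc, z=0.5: error={normalized_envelope(k, 0.5, k_nl):.3f}")
```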
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
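The paper's exact HRU error metric is not reproduced in this abstract; below is a generic, hedged sketch in the same spirit, measuring aggregation-induced information loss as the area fraction whose land-cover or soil class differs from the dominant class its HRU adopts. All names and the toy data are hypothetical.

```python
import numpy as np

def aggregation_information_loss(cell_classes, cell_areas, hru_ids):
    """Area fraction misclassified when each HRU adopts its dominant class.

    A generic proxy for aggregation-induced information loss: within every HRU,
    cells whose class differs from the HRU's area-dominant class count as lost
    information, weighted by cell area.
    """
    cell_classes = np.asarray(cell_classes)
    cell_areas = np.asarray(cell_areas, dtype=float)
    hru_ids = np.asarray(hru_ids)
    lost = 0.0
    for hru in np.unique(hru_ids):
        sel = hru_ids == hru
        classes, areas = cell_classes[sel], cell_areas[sel]
        dominant = max(set(classes.tolist()),
                       key=lambda c: areas[classes == c].sum())
        lost += areas[classes != dominant].sum()
    return lost / cell_areas.sum()

# Hypothetical 8-cell watershed split into 2 HRUs
print(aggregation_information_loss(
    cell_classes=["forest", "forest", "crop", "forest", "crop", "crop", "urban", "crop"],
    cell_areas=[1, 1, 1, 1, 2, 2, 1, 2],
    hru_ids=[0, 0, 0, 0, 1, 1, 1, 1]))
```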
Rong, Hao; Tian, Jin; Zhao, Tingdi
2016-01-01
In traditional approaches of human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included, and thus the analysis is burdened with incomprehensiveness without reflecting the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitatively estimating the human error probability (HEP) and its related variables changing over time in a long period. Taking the Minuteman III missile accident in 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; also the critical factors are determined in terms of impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be focused on to minimize human errors in a long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
A high accuracy magnetic heading system composed of fluxgate magnetometers and a microcomputer
NASA Astrophysics Data System (ADS)
Liu, Sheng-Wu; Zhang, Zhao-Nian; Hung, James C.
The authors present a magnetic heading system consisting of two fluxgate magnetometers and a single-chip microcomputer. The system, when compared to gyro compasses, is smaller in size, lighter in weight, simpler in construction, quicker in reaction time, free from drift, and more reliable. Using a microcomputer in the system, heading error due to compass deviation, sensor offsets, scale factor uncertainty, and sensor tilts can be compensated with the help of an error model. The laboratory test of a typical system showed that the accuracy of the system was improved from more than 8 deg error without error compensation to less than 0.3 deg error with compensation.
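A minimal sketch of the core heading computation with offset and scale-factor compensation is shown below. The calibration values are hypothetical, the sign convention depends on sensor mounting, and the tilt-compensation terms (which require roll and pitch) are omitted for brevity.

```python
import math

def compensated_heading(bx, by, offsets=(0.0, 0.0), scales=(1.0, 1.0)):
    """Magnetic heading from a two-axis fluxgate after offset/scale correction.

    bx, by: raw horizontal field components; offsets and scales would come from
    a previously fitted error model (hypothetical values used here).
    """
    x = (bx - offsets[0]) / scales[0]
    y = (by - offsets[1]) / scales[1]
    return math.degrees(math.atan2(-y, x)) % 360.0  # heading in [0, 360) degrees

# Hypothetical raw readings and calibration parameters
print(compensated_heading(0.21, -0.05, offsets=(0.01, -0.02), scales=(1.02, 0.98)))
```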
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Remington, Paul J.; Walker, Bruce E.
2003-01-01
A test program to demonstrate simplification of Active Noise Control (ANC) systems relative to standard techniques was performed on the NASA Glenn Active Noise Control Fan from May through September 2001. The target mode was the m = 2 circumferential mode generated by the rotor-stator interaction at 2BPF. Seven radials (combined inlet and exhaust) were present at this condition. Several different error-sensing strategies were implemented, and integration of the error sensors with passive treatment was investigated. The strategies were: (i) an in-duct linear axial array, (ii) an in-duct steering array, (iii) a pylon-mounted array, and (iv) a near-field boom array. The effect of incorporating passive treatment was investigated, as was reducing the actuator count. These simplified systems were compared to a fully specified ANC system. Modal data acquired using the Rotating Rake are presented for a range of corrected fan rpm. Simplified control has been demonstrated to be possible but requires a well-known and dominant mode signature. The results documented herein are part III of a three-part series of reports with the same base title; parts I and II document the control system and error-sensing design and implementation.
Gole, Markus; Köchel, Angelika; Schäfer, Axel; Schienle, Anne
2012-03-01
The goal of the present study was to investigate a threat engagement, disengagement, and sensitivity bias in individuals suffering from pathological worry. Twenty participants high in worry proneness and 16 control participants low in worry proneness completed an emotional go/no-go task with worry-related threat words and neutral words. Shorter reaction times (i.e., threat engagement bias), smaller omission error rates (i.e., threat sensitivity bias), and larger commission error rates (i.e., threat disengagement bias) emerged only in the high worry group when worry-related words constituted the go-stimuli and neutral words the no-go stimuli. Also, smaller omission error rates as well as larger commission error rates were observed in the high worry group relative to the low worry group when worry-related go stimuli and neutral no-go stimuli were used. The obtained results await further replication within a generalized anxiety disorder sample. Also, further samples should include men as well. Our data suggest that worry-prone individuals are threat-sensitive, engage more rapidly with aversion, and disengage harder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
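The basic clustering idea, summing exactly within clusters while ignoring inter-cluster interactions, can be illustrated on a toy 1-D Ising chain, which is an assumption of this sketch rather than the biomolecular systems studied in the paper. Cutting a weak bond factorizes the partition function into a product of independent cluster sums.

```python
import itertools
import numpy as np

def boltzmann_sum(J, beta=1.0):
    """Exact partition function of a 1-D Ising chain with couplings J[i] between spins i, i+1."""
    n = len(J) + 1
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy = -sum(J[i] * spins[i] * spins[i + 1] for i in range(len(J)))
        Z += np.exp(-beta * energy)
    return Z

J = [1.0, 1.0, 0.1, 1.0, 1.0]   # weak middle bond is a natural cluster boundary
Z_exact = boltzmann_sum(J)

# Clustering approximation: drop the weak inter-cluster bond, treat halves independently
Z_approx = boltzmann_sum(J[:2]) * boltzmann_sum(J[3:])
print(f"relative error: {abs(Z_approx - Z_exact) / Z_exact:.3%}")
```

The error of the approximation is controlled by the strength of the ignored inter-cluster coupling, which is the intuition behind the error bounds derived in the paper.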
The effect of early deprivation on executive attention in middle childhood.
Loman, Michelle M; Johnson, Anna E; Westerlund, Alissa; Pollak, Seth D; Nelson, Charles A; Gunnar, Megan R
2013-01-01
Children reared in deprived environments, such as institutions for the care of orphaned or abandoned children, are at increased risk for attention and behavior regulation difficulties. This study examined the neurobehavioral correlates of executive attention in post-institutionalized (PI) children. The performance and event-related potentials (ERPs) of 10- and 11-year-old internationally adopted PI children on two executive attention tasks, go/no-go and Flanker, were compared with those of two groups: children adopted early internationally from foster care (PF) and nonadopted children (NA). Behavioral measures suggested problems with sustained attention: PIs performed more poorly on go trials but not on no-go trials of the go/no-go task, and made more errors on both congruent and incongruent trials of the Flanker. ERPs suggested differences in inhibitory control and error monitoring, as PIs had a smaller N2 amplitude on go/no-go and a smaller error-related negativity on Flanker. This pattern of results raises questions regarding the nature of attention difficulties for PI children. The behavioral errors are not specific to executive attention and instead likely reflect difficulties in overall sustained attention. The ERP results are consistent with neural activity related to deficits in inhibitory control (N2) and error monitoring (error-related negativity). Questions emerge regarding the similarity of attention regulatory difficulties in PIs to those experienced by non-PI children with ADHD. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed; the analysis reveals that the commonly used Hanning window leads to a smaller interpolation error, which can also be largely eliminated by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic decay of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation, and compared with the Gans method and the LPM method, it has the advantages of simple computation, less time consumption, and a short data requirement; the actual data calculation of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
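The dual-cosine window's coefficients are not given in this abstract, so the sketch below shows only the generic window-based FRF estimation from step-response data that the paper builds on: difference the step response to approximate the impulse response, apply a taper (Hanning here, as the baseline the paper compares against), and take the FFT. The second-order test signal is a hypothetical approximation for illustration.

```python
import numpy as np

def frf_from_step(step_response, fs, window=None):
    """Estimate an FRF (up to a scale factor) from step-response samples:
    difference to get the impulse response, taper it, then take the FFT."""
    h = np.diff(step_response)  # impulse response of a unit-step input
    w = np.hanning(len(h)) if window is None else window(len(h))
    H = np.fft.rfft(h * w)
    f = np.fft.rfftfreq(len(h), d=1.0 / fs)
    return f, H

# Hypothetical lightly damped second-order step response (sin term neglected)
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
wn, zeta = 2 * np.pi * 50, 0.05
y = 1 - np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)

f, H = frf_from_step(y, fs)
print(f[np.argmax(np.abs(H))])  # magnitude peak near the 50 Hz resonance
```

Note that tapering a one-sided decaying transient is exactly what introduces the transient error the paper analyzes; a window with a smaller front-end value suppresses it.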
Alderete, John; Davies, Monica
2018-04-01
This work describes a methodology of collecting speech errors from audio recordings and investigates how some of its assumptions affect data quality and composition. Speech errors of all types (sound, lexical, syntactic, etc.) were collected by eight data collectors from audio recordings of unscripted English speech. Analysis of these errors showed that: (i) different listeners find different errors in the same audio recordings, but (ii) the frequencies of error patterns are similar across listeners; (iii) errors collected "online" using on the spot observational techniques are more likely to be affected by perceptual biases than "offline" errors collected from audio recordings; and (iv) datasets built from audio recordings can be explored and extended in a number of ways that traditional corpus studies cannot be.
"Testing during Study Insulates against the Buildup of Proactive Interference": Correction
ERIC Educational Resources Information Center
Szpunar, Karl K.; McDermott, Kathleen B.; Roediger, Henry L., III
2009-01-01
Reports an error in "Testing during study insulates against the buildup of proactive interference" by Karl K. Szpunar, Kathleen B. McDermott and Henry L. Roediger III ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Nov], Vol 34[6], 1392-1399). Incorrect figures were printed due to an error in the…
78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...
Choi, Kai Yip; Yu, Wing Yan; Lam, Christie Hang I; Li, Zhe Chuang; Chin, Man Pan; Lakshmanan, Yamunadevi; Wong, Francisca Siu Yin; Do, Chi Wai; Lee, Paul Hong; Chan, Henry Ho Lung
2017-09-01
People in Hong Kong generally live in a densely populated area, and their homes are smaller than in most other cities worldwide. Interestingly, East Asian cities with high population densities seem to have higher myopia prevalence, but the association between them has not been established. This study investigated whether the crowded habitat in Hong Kong is associated with refractive error among children. In total, 1075 subjects [mean age (S.D.): 9.95 years (0.97), 586 boys] were recruited. Information such as demographics, living environment, parental education and ocular status was collected using parental questionnaires. The ocular axial length and refractive status of all subjects were measured by qualified personnel. Ocular axial length was found to be significantly longer among those living in districts with a higher population density (F(2,1072) = 6.15, p = 0.002) and those living in a smaller home (F(2,1072) = 3.16, p = 0.04). Axial lengths were the same among different types of housing (F(3,1071) = 1.24, p = 0.29). Non-cycloplegic autorefraction suggested a more negative refractive error in those living in districts with a higher population density (F(2,1072) = 7.88, p < 0.001) and those living in a smaller home (F(2,1072) = 4.25, p = 0.02). After adjustment for other confounding covariates, population density and home size also significantly predicted axial length and non-cycloplegic refractive error in the multiple linear regression model, while axial length and refractive error had no relationship with type of housing. Axial length in children and childhood refractive error were associated with high population density and small home size. A constricted living space may be an environmental threat for myopia development in children. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
Electrogastrograms during motion sickness in fasted and fed subjects
NASA Technical Reports Server (NTRS)
Stewart, John J.; Wood, Mary J.; Wood, Charles D.
1989-01-01
Seven human volunteers were subjected to stressful Coriolis stimulation (rotating chair) either during the fasted state or following the ingestion of yogurt (6 oz). Subjects tested after yogurt reached a malaise-III (M-III) endpoint of motion sickness after significantly (p < 0.01) fewer head movements than subjects tested in the fasted state. Surface electrogastrogram (EGG) recordings at M-III were similar for both dietary states and consisted of a brief period of tachygastria, followed by a period of low-amplitude EGG waves. Ingestion of yogurt enhanced susceptibility to motion sickness but did not affect the associated pattern of EGG.
De Vries, A; Feleke, S
2008-12-01
This study assessed the accuracy of 3 methods that predict the uniform milk price in Federal Milk Marketing Order 6 (Florida). Predictions were made for 1 to 12 mo into the future. Data were from January 2003 to May 2007. The CURRENT method assumed that future uniform milk prices were equal to the last announced uniform milk price. The F+BASIS and F+UTIL methods were based on the milk futures markets because the futures prices reflect the market's expectation of the class III and class IV cash prices that are announced monthly by USDA. The F+BASIS method added an exponentially weighted moving average of the difference between the class III cash price and the historical uniform milk price (also known as basis) to the class III futures price. The F+UTIL method used the class III and class IV futures prices, the most recently announced butter price, and historical utilizations to predict the skim milk prices, butterfat prices, and utilizations in all 4 classes. Predictions of future utilizations were made with a Holt-Winters smoothing method. Federal Milk Marketing Order 6 had high class I utilization (85 +/- 4.8%). Mean and standard deviation of the class III and class IV cash prices were $13.39 +/- 2.40/cwt (1 cwt = 45.36 kg) and $12.06 +/- 1.80/cwt, respectively. The actual uniform price in Tampa, Florida, was $16.62 +/- 2.16/cwt. The basis was $3.23 +/- 1.23/cwt. The F+BASIS and F+UTIL predictions were generally too low during the period considered because the class III cash prices were greater than the corresponding class III futures prices. For the 1- to 6-mo-ahead predictions, the roots of the mean squared prediction errors from the F+BASIS method were $1.12, $1.20, $1.55, $1.91, $2.16, and $2.34/cwt, respectively. The root of the mean squared prediction error ranged from $2.50 to $2.73/cwt for predictions up to 12 mo ahead. Results from the F+UTIL method were similar. The accuracies of the F+BASIS and F+UTIL methods for all 12 forecast horizons were not significantly different. Application of modified Diebold-Mariano tests showed that no method included all the information contained in the other methods. In conclusion, both the F+BASIS and F+UTIL methods tended to predict the future uniform milk prices more accurately than the CURRENT method, but prediction errors could be substantial even a few months into the future. The majority of the prediction error was caused by the inefficiency of the futures markets in predicting the class III cash prices.
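A minimal sketch of an F+BASIS-style forecast follows: smooth the historical basis with an exponentially weighted moving average and add it to the class III futures price. The smoothing constant, sign convention, and price values are illustrative assumptions, not the paper's parameters.

```python
def ewma(values, alpha):
    """Exponentially weighted moving average of a series (most recent value last)."""
    s = values[0]
    for v in values[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

def f_basis_forecast(class3_futures, basis_history, alpha=0.3):
    """F+BASIS-style forecast: class III futures price plus a smoothed basis.

    basis_history holds uniform-minus-class-III-cash differences ($/cwt);
    alpha is an illustrative assumption.
    """
    return class3_futures + ewma(basis_history, alpha)

# Hypothetical basis history ($/cwt) and a futures quote
basis_history = [3.1, 3.4, 2.9, 3.3, 3.2]
print(f_basis_forecast(class3_futures=13.50, basis_history=basis_history))
```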
TOPEX/POSEIDON orbit maintenance maneuver design
NASA Technical Reports Server (NTRS)
Bhat, R. S.; Frauenholz, R. B.; Cannell, Patrick E.
1990-01-01
The Ocean Topography Experiment (TOPEX/POSEIDON) mission orbit requirements are outlined, as well as its control and maneuver spacing requirements, including longitude and time targeting. A ground-track prediction model dealing with geopotential, luni-solar gravity, and atmospheric-drag perturbations is considered. Targeting with all modeled perturbations is discussed, and ground-track prediction errors such as initial semimajor-axis, orbit-determination, maneuver-execution, and atmospheric-density modeling errors are assessed. A longitude targeting strategy for two extreme situations is investigated employing all modeled perturbations and prediction errors. It is concluded that atmospheric-drag modeling errors are the prevailing ground-track prediction error source early in the mission during high solar flux, and that the low solar-flux levels expected late in the experiment imply smaller maneuver magnitudes.
No evidence for [O III] variability in Mrk 142
NASA Astrophysics Data System (ADS)
Barth, Aaron J.; Bentz, Misty C.
2016-05-01
Using archival data from the 2008 Lick AGN Monitoring Project, Zhang & Feng claimed to find evidence for flux variations in the narrow [O III] emission of the Seyfert 1 galaxy Mrk 142 over a two-month time span. If correct, this would imply a surprisingly compact size for the narrow-line region. We show that the claimed [O III] variations are merely the result of random errors in the overall flux calibration of the spectra. The data do not provide any support for the hypothesis that the [O III] flux was variable during the 2008 monitoring period.
Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors
Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.
2013-01-01
The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated, and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms, and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals; therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms; runoff volumes were overestimated for 44 storms and underestimated for 22 storms. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharges and runoff volumes calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er…
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Kocoulin, Valerii; Nefediev, Yurii
2016-07-01
At Kazan University, computer simulations are carried out for observations of lunar physical libration, in support of projects that plan to install measuring equipment on the lunar surface. One such project is the ILOM project (Japan), in which an optical telescope with a CCD will be placed at a lunar pole. As a result, the selenographic coordinates (x and y) of a star will be determined with an accuracy of 1 ms of arc. On the basis of the analytical theory of physical libration, we developed a technique for solving the inverse problem of libration, and we have already shown, for example, that an error of about ɛ seconds in the determined selenographic coordinates does not lead to errors larger than 1.414ɛ in the libration angles ρ and Iσ. Libration in longitude is not determined from observations of the polar star (Petrova et al., 2012). The accuracy of the libration in the inverse problem depends on the accuracy of the star coordinates, α and δ, taken from star catalogues. Checking this influence is the task of the present study. For the simulation, we developed software that selects the stars falling in the field of view of the lunar telescope over the observation period. Equatorial coordinates of stars were taken from several fundamental catalogues: UCAC2-BSS, Hipparcos, Tycho, FK6 (parts I and III) and the Astronomical Almanac. An analysis of these catalogues from the point of view of the accuracy of the star coordinates was performed by Nefediev et al., 2013. The largest errors, 20-70 ms, were found in the UCAC2 and Tycho catalogues; the others have errors of about a millisecond of arc. We simulated the observations with the mentioned errors and obtained the following results. 1. An error in the declination Δδ of a star causes an error of the same order in the libration parameters ρ and Iσ, while the sensitivity of the libration to errors Δα is ten times smaller. Fortunately, owing to the statistics (30 to 70 stars, depending on the time of observation), this error is reduced by an order of magnitude, i.e., it does not exceed the error of the observed selenographic coordinates. 2. Most problematically, errors in the catalogue coordinates cause a small but constant shift in ρ and Iσ: when Δα, Δδ ~ 0.01", the shift reaches 0.0025". Moreover, there is a trend with a slight but noticeable slope. 3. The effect of an error in the declination of a star is substantially stronger than that of an error in right ascension; perhaps this is characteristic only of polar observations. For the required accuracy in the determination of the physical libration, these phenomena must be taken into account when processing the planned observations. References: Nefediev et al., 2013, Uchenye zapiski Kazanskogo universiteta, v. 155, 1, p. 188-194. Petrova, N., Abdulmyanov, T., Hanada, H., Some qualitative manifestations of the physical libration of the Moon by observing stars from the lunar surface, Adv. Space Res., 2012, v. 50, p. 1702-1711.
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with a global accuracy of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM, acquired with single-pass SAR interferometry, was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined using about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS), scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m, and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types: for low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at the 90% confidence level for the global TanDEM-X DEM, outperforming the requirement by a factor of five. Owing to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
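The accuracy measures reported above (mean error, RMSE, and the 90% linear error) are straightforward to compute from co-located DEM and GPS heights; a minimal sketch follows, with hypothetical height values for illustration.

```python
import numpy as np

def dem_accuracy(dem_heights, gps_heights):
    """Mean error, RMSE, and 90% linear error (LE90) of DEM-minus-GPS differences."""
    dh = np.asarray(dem_heights, dtype=float) - np.asarray(gps_heights, dtype=float)
    mean_error = dh.mean()
    rmse = np.sqrt(np.mean(dh ** 2))
    le90 = np.percentile(np.abs(dh), 90.0)  # 90% of absolute errors fall below this
    return mean_error, rmse, le90

# Hypothetical co-located DEM and GPS heights (m)
dem = [102.1, 98.7, 251.3, 76.4, 133.0]
gps = [101.8, 99.5, 250.2, 76.9, 132.4]
print(dem_accuracy(dem, gps))
```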
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...
2017-02-15
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter
2017-01-01
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of "big data" that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. PMID:25043853
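A toy numerical illustration of the paper's central point (my own example, not from the article): increasing n shrinks the standard error but leaves a systematic measurement bias untouched, so a biased answer merely looks more precise.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, bias = 10.0, 0.5      # hypothetical systematic measurement bias

for n in (100, 10_000, 1_000_000):
    sample = rng.normal(true_mean + bias, 5.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>9,}: estimate={sample.mean():.3f}  standard error={se:.5f}")

# The standard error shrinks like 1/sqrt(n), but every estimate stays
# about 0.5 above the true mean of 10.0: big n does not fix bias.
```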
Flux Sampling Errors for Aircraft and Towers
NASA Technical Reports Server (NTRS)
Mahrt, Larry
1998-01-01
Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.
NASA Astrophysics Data System (ADS)
Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André
2017-10-01
The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (˜5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of comparable sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
Quantum error-correcting code for ternary logic
NASA Astrophysics Data System (ADS)
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, so a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
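To illustrate the decomposition claim (a sketch using the standard qutrit Weyl-Heisenberg operators; not the authors' code), the swap of basis states |1⟩ and |2⟩ can be expanded numerically in the operators X^a Z^b, which form a complete orthogonal basis for 3×3 matrices:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
X = np.roll(np.eye(3), 1, axis=0)   # shift error: |k> -> |k+1 mod 3>
Z = np.diag([1, w, w ** 2])         # phase error: |k> -> w^k |k>

# Pairwise "bit swap" error exchanging |1> and |2>
S12 = np.array([[1, 0, 0],
                [0, 0, 1],
                [0, 1, 0]], dtype=complex)

# Hilbert-Schmidt expansion in the basis {X^a Z^b}
for a in range(3):
    for b in range(3):
        B = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
        c = np.trace(B.conj().T @ S12) / 3
        if abs(c) > 1e-12:
            print(f"X^{a} Z^{b}: coefficient {c:.3f}")
```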
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-11-01
With the goal of designing interferometers and interferometer sensors, e.g., LADARs, with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase-estimate error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot-noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.
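For context, the two benchmark limits mentioned are the standard results for phase estimation with N photons (general background, not derived in this abstract):

```latex
\Delta\phi_{\mathrm{SNL}} = \frac{1}{\sqrt{N}}, \qquad
\Delta\phi_{\mathrm{HL}}  = \frac{1}{N}
```

A N00N state (|N,0⟩ + |0,N⟩)/√2 produces interference fringes proportional to cos(Nφ), which is the source of the N-fold resolution enhancement against which the LCMMS and PMMS results above are compared.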
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers using: 1) geometric centroids of ZIP code polygons as origins; 2) population centroids as origins; 3) service-area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code; 4) service-area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service-area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
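A minimal sketch of the population-centroid idea (hypothetical names; assumes census-block coordinates and populations within each ZIP code are available):

```python
import numpy as np

def population_centroid(block_lons, block_lats, block_pops):
    """Population-weighted centroid of a ZIP code, computed from
    census-block centroids and their populations."""
    w = np.asarray(block_pops, dtype=float)
    return (np.average(block_lons, weights=w),
            np.average(block_lats, weights=w))

# Example: three blocks; most residents live near (-71.30, 43.10)
lon, lat = population_centroid([-71.5, -71.3, -71.2],
                               [43.0, 43.1, 43.1],
                               [120, 900, 80])
```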
On the accuracy of the Head Impact Telemetry (HIT) System used in football helmets.
Jadischke, Ron; Viano, David C; Dau, Nathan; King, Albert I; McCarthy, Joe
2013-09-03
On-field measurement of head impacts has relied on the Head Impact Telemetry (HIT) System, which uses helmet-mounted accelerometers to determine linear and angular head accelerations. HIT is used in youth and collegiate football to assess the frequency and severity of helmet impacts. This paper evaluates the accuracy of HIT for individual head impacts. Most HIT validations used a medium helmet on a Hybrid III head. However, the appropriate helmet size is large, based on the Hybrid III head circumference (58 cm) and the manufacturer's fitting instructions. An instrumented skull cap was used to measure the pressure between the head of football players (n=63) and their helmet. The average pressure with a large helmet on the Hybrid III was comparable to the average pressure from helmets used by players. A medium helmet on the Hybrid III produced average pressures greater than the 99th percentile volunteer pressure level. Linear impactor tests were conducted using a large and a medium helmet on the Hybrid III. Testing was conducted by two independent laboratories. HIT data were compared to data from the Hybrid III equipped with a 3-2-2-2 accelerometer array. The absolute and root mean square error (RMSE) for HIT were computed for each impact (n=90). Fifty-five percent (n=49) had an absolute error greater than 15%, while the RMSE was 59.1% for peak linear acceleration. Copyright © 2013 Elsevier Ltd. All rights reserved.
Yoon, Je Moon; Shin, Dong Hoon; Kim, Sang Jin; Ham, Don-Il; Kang, Se Woong; Chang, Yun Sil; Park, Won Soon
2017-01-01
To investigate the anatomical and refractive outcomes in patients with Type 1 retinopathy of prematurity in Zone I. The medical records of 101 eyes of 51 consecutive infants with Type 1 retinopathy of prematurity in Zone I were analyzed. Infants were treated by conventional laser photocoagulation (Group I), combined intravitreal bevacizumab injection and Zone I sparing laser (Group II), or intravitreal bevacizumab with deferred laser treatment (Group III). The proportion of unfavorable anatomical outcomes including retinal fold, disc dragging, retrolental tissue obscuring the view of the posterior pole, retinal detachment, and early refractive errors were compared among the three groups. The mean gestational age at birth and the birth weight of all 51 infants were 24.3 ± 1.1 weeks and 646 ± 143 g, respectively. In Group I, an unfavorable anatomical outcome was observed in 10 of 44 eyes (22.7%). In contrast, in Groups II and III, all eyes showed favorable anatomical outcomes without reactivation or retreatment. The refractive error was less myopic in Group III than in Groups I and II (spherical equivalent of -4.62 ± 4.00 D in Group I, -5.53 ± 2.21 D in Group II, and -1.40 ± 2.19 D in Group III; P < 0.001). In Type 1 retinopathy of prematurity in Zone I, intravitreal bevacizumab with concomitant or deferred laser therapy yielded a better anatomical outcome than conventional laser therapy alone. Moreover, intravitreal bevacizumab with deferred laser treatment resulted in less myopic refractive error.
Bartel, Thomas W.; Yaniv, Simone L.
1997-01-01
The 60 min creep data from National Type Evaluation Procedure (NTEP) tests performed at the National Institute of Standards and Technology (NIST) on 65 load cells have been analyzed in order to compare their creep and creep recovery responses, and to compare the 60 min creep with creep over shorter time periods. To facilitate this comparison, the data were fitted to a multiple-term exponential equation, which adequately describes the creep and creep recovery responses of load cells. The use of such a curve fit reduces the effect of the random error in the indicator readings on the calculated values of the load cell creep. Examination of the fitted curves shows that the creep recovery responses, after inversion by a change in sign, are generally similar in shape to the creep response, but smaller in magnitude. The average ratio of the absolute value of the maximum creep recovery to the maximum creep is 0.86; however, no reliable correlation between creep and creep recovery can be drawn from the data. The fitted curves were also used to compare the 60 min creep of the NTEP analysis with the 30 min creep and other parameters calculated according to the Organisation Internationale de Métrologie Légale (OIML) R 60 analysis. The average ratio of the 30 min creep value to the 60 min value is 0.84. The OIML class C creep tolerance is less than 0.5 of the NTEP tolerance for classes III and III L. PMID:27805151
NASA Astrophysics Data System (ADS)
Galavís, M. E.; Mendoza, C.; Zeippen, C. J.
1998-12-01
Since Burgess et al. (1997) have recently questioned the accuracy of the effective collision strength calculated in the IRON Project for the electron impact excitation of the 3s²3p⁴ ¹D-¹S quadrupole transition in Ar III, an extended R-matrix calculation has been performed for this transition. The original 24-state target model was maintained, but the energy regime was increased to 100 Ryd. It is shown that in order to ensure convergence of the partial wave expansion at such energies, it is necessary to take into account partial collision strengths up to L=30 and to "top up" with a geometric series procedure. By comparing effective collision strengths, it is found that the differences from the original calculation are not greater than 25% around the upper end of the common temperature range and that they are much smaller than 20% over most of it. This is consistent with the accuracy rating (20%) previously assigned to transitions in this low-ionisation system. Also, the present high-temperature limit agrees fairly well (15%) with the Coulomb-Born limit estimated by Burgess et al., thus confirming our previous accuracy rating. It appears that Burgess et al., in their data assessment, have overextended the low-energy behaviour of our reduced effective collision strength to obtain an extrapolated high-temperature limit that appeared to be in error by a factor of 2.
ERIC Educational Resources Information Center
Hopwood, Christopher J.; Richard, David C. S.
2005-01-01
Research on the Wechsler Adult Intelligence Scale-Revised and Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) suggests that practicing clinical psychologists and graduate students make item-level scoring errors that affect IQ, index, and subtest scores. Studies have been limited in that Full-Scale IQ (FSIQ) and examiner administration,…
NASA Technical Reports Server (NTRS)
Wyman, D.; Steinman, R. M.
1973-01-01
Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.
NASA Technical Reports Server (NTRS)
Sun, W.; Loeb, N. G.; Videen, G.; Fu, Q.
2004-01-01
Natural particles such as ice crystals in cirrus clouds generally are not pristine but have additional micro-roughness on their surfaces. A two-dimensional finite-difference time-domain (FDTD) program with a perfectly matched layer absorbing boundary condition is developed to calculate the effect of surface roughness on light scattering by long ice columns. When we use a spatial cell size of 1/120 of the incident wavelength for ice circular cylinders with size parameters of 6 and 24 at wavelengths of 0.55 and 10.8 μm, respectively, the errors in the FDTD results for the extinction, scattering, and absorption efficiencies are smaller than ~0.5%. The errors in the FDTD results for the asymmetry factor are smaller than ~0.05%. The errors in the FDTD results for the phase-matrix elements are smaller than ~5%. By adding a pseudorandom change as great as 10% of the radius of a cylinder, we calculate the scattering properties of randomly oriented rough-surfaced ice columns. We conclude that, although the effect of small surface roughness on light scattering is negligible, the scattering phase-matrix elements change significantly for particles with large surface roughness. The roughness on the particle surface can make the conventional phase function smooth. The most significant effect of the surface roughness is the decay of polarization of the scattered light.
Benau, Erik M; Moelter, Stephen T
2016-09-01
The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components-elicited after the commission of a response-associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN, and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed a SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy, faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not impacted by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Future studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be a useful technique for future research into response monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
Error in the Honeybee Waggle Dance Improves Foraging Flexibility
Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro
2014-01-01
The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10–15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0–5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10–15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off. PMID:24569525
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung; Gardner, Chester S.
1989-01-01
Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems are then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.
Oral findings in patients with mucolipidosis type III.
Cavalcante, Weber Céo; Santos, Luciano Cincurá Silva; Dos Santos, Josiane Nascimento; de Vasconcellos, Sara Juliana de Abreu; de Azevedo, Roberto Almeida; Dos Santos, Jean Nunes
2012-01-01
Mucolipidosis type III is a rare, autosomal recessive disorder, part of a group of storage diseases resulting from an inborn error of lysosomal enzyme metabolism. It is characterized by the gradual onset of signs and symptoms affecting physical and mental development, as well as visual, cardiac, skeletal and joint changes. Although oral findings associated with mucolipidosis type II have been extensively reported, there is a shortage of information on mucolipidosis type III. This paper presents radiological and histological findings of multiple radiolucent lesions associated with impacted teeth in the jaw of a 16-year-old patient with mucolipidosis type III.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
Impacts of motivational valence on the error-related negativity elicited by full and partial errors.
Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki
2016-02-01
Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Synthesis and characterization of Fe(III)-silicate precipitation tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parmar, K.; Pramanik, A.K.; Bandyopadhya, N.R.
2010-09-15
Fe(III)-silicate precipitation tubes synthesized through the 'silica garden' route have been characterized using a number of analytical techniques including X-ray diffraction, infrared spectroscopy, atomic force microscopy, and scanning and transmission electron microscopy. These tubes are brittle and amorphous and are hierarchically built from smaller tubes of 5-10 nm diameter. They remain amorphous at least up to 650 °C. Cristobalite and hematite are the major phases present in Fe(III)-silicate tubes heated at 850 °C. Morphology and chemical compositions at the external and internal walls of these tubes are remarkably different. These tubes are porous, with a high BET surface area of 291.2 m²/g. Fe(III)-silicate tubes contain a significant amount of physically and chemically bound moisture. They show promise as an adsorbent for Pb(II), Zn(II), and Cr(III) in aqueous medium.
NASA Technical Reports Server (NTRS)
Antonille, Scott
2004-01-01
For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to +/-3nm using a diffractive CGH null lens. Of main concern is removal of large systematic errors resulting from surface deflections of the mirror due to gravity as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing as well as minimizing the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard will be included.
Tang, Jing; Thorhauer, Eric; Marsh, Chelsea; Fu, Freddie H.
2013-01-01
Purpose Femoral tunnel angle (FTA) has been proposed as a metric for evaluating whether ACL reconstruction was performed anatomically. In clinic, radiographic images are typically acquired with an uncertain amount of internal/external knee rotation. The extent to which knee rotation will influence FTA measurement is unclear. Furthermore, differences in FTA measurement between the two common positions (0° and 45° knee flexion) have not been established. The purpose of this study was to investigate the influence of knee rotation on FTA measurement after ACL reconstruction. Methods Knee CT data from 16 subjects were segmented to produce 3D bone models. Central axes of tunnels were identified. The 0° and 45° flexion angles were simulated. Knee internal/external rotations were simulated in a range of ±20°. FTA was defined as the angle between the tunnel axis and femoral shaft axis, orthogonally projected into the coronal plane. Results Femoral tunnel angle was positively/negatively correlated with knee rotation angle at 0°/45° knee flexion. At 0° knee flexion, FTA for anteromedial (AM) tunnels was significantly decreased at 20° of external knee rotation. At 45° knee flexion, more than 16° external or 19° internal rotation significantly altered FTA measurements for single-bundle tunnels; smaller rotations (±9° for AM, ±5° for PL) created significant errors in FTA measurements after double-bundle reconstruction. Conclusion Femoral tunnel angle measurements were correlated with knee rotation. Relatively small imaging malalignment introduced significant errors with the knee flexed 45°. This study supports using the 0° flexion position for knee radiographs to reduce errors in FTA measurement due to knee internal/external rotation. Level of evidence Case–control study, Level III. PMID:23589127
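A sketch of the FTA computation as defined above (assuming a frame in which y is the anterior-posterior axis, so the coronal projection simply drops y; not the authors' implementation):

```python
import numpy as np

def femoral_tunnel_angle(tunnel_axis, shaft_axis):
    """Angle (degrees) between the tunnel axis and femoral shaft axis
    after orthogonal projection into the coronal (x-z) plane."""
    def coronal_unit(v):
        v = np.asarray(v, dtype=float).copy()
        v[1] = 0.0                      # drop anterior-posterior component
        return v / np.linalg.norm(v)
    t, s = coronal_unit(tunnel_axis), coronal_unit(shaft_axis)
    return np.degrees(np.arccos(np.clip(np.dot(t, s), -1.0, 1.0)))
```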
Nakano, Shogo; Yoshida, Miwa; Fujii, Kimihito; Yorozuya, Kyoko; Kousaka, Junko; Mouri, Yukako; Fukutomi, Takashi; Ohshima, Yukihiko; Kimura, Junko; Ishiguchi, Tsuneo
2012-01-01
This study verified the usefulness of recently developed real-time virtual sonography (RVS), which coordinates a sonographic image with the magnetic resonance imaging (MRI) multiplanar reconstruction (MPR) using magnetic navigation. The purpose of this study was to evaluate the accuracy of RVS in sonographically identifying lesions enhancing on breast MRI. Between December 2008 and May 2009, RVS was performed in 51 consecutive patients with 63 enhancing lesions. MRI was performed with the patients in the supine position using a 1.5-T imager with a body surface coil to achieve the same position as with sonography. To assess the accuracy of RVS, the following three issues were analyzed: (i) the sonographic detection rate of enhancing lesions, (ii) the comparison of the tumor size measured by sonography and by MRI-MPR and (iii) the positioning errors, as the distance from the actual sonographic position to the expected MRI position in 3-D. Among the 63 enhancing lesions, 42 (67%) were identified by conventional B-mode, whereas the remaining 21 (33%) initially conventional-B-mode-occult lesions were identified by RVS alone. The sonographic size of the lesions detected by RVS alone was significantly smaller than that of lesions detected by conventional B-mode (p < 0.001). The mean tumor size was 12.3 mm by real-time sonography and 14.1 mm by MRI-MPR (r = 0.848, p < 0.001). The mean positioning errors for the transverse and sagittal planes and the depth from the skin were 7.7, 6.9 and 2.8 mm, respectively. The overall mean 3D positioning error was 12.0 mm. Our results suggest that RVS has good targeting accuracy for directly comparing a sonographic image with MRI results without operator dependence. Copyright © 2012 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Hidetoshi, E-mail: hsuzuki@cc.miyazaki-u.ac.jp; Nakata, Yuka; Takahasi, Masamitu
2016-03-15
The formation and evolution of rotational twin (TW) domains introduced by a stacking fault during molecular-beam epitaxial growth of GaAs on Si (111) substrates were studied by in situ x-ray diffraction. To modify the volume ratio of TW to total GaAs domains, GaAs was deposited under high and low group V/group III (V/III) flux ratios. For low V/III, there was less nucleation of TW than normal growth (NG) domains, although the NG and TW growth rates were similar. For high V/III, the NG and TW growth rates varied until a few GaAs monolayers were deposited; the mean TW domain size was smaller for all film thicknesses.
Model for threading dislocations in metamorphic tandem solar cells on GaAs (001) substrates
NASA Astrophysics Data System (ADS)
Song, Yifei; Kujofsa, Tedi; Ayers, John E.
2018-02-01
We present an approximate model for the threading dislocations in III-V heterostructures and have applied this model to study the defect behavior in metamorphic triple-junction solar cells. This model represents a new approach in which the coefficient for second-order threading dislocation annihilation and coalescence reactions is considered to be determined by the length of misfit dislocations, LMD, in the structure, and we therefore refer to it as the LMD model. On the basis of this model we have compared the average threading dislocation densities in the active layers of triple junction solar cells using linearly-graded buffers of varying thicknesses as well as S-graded (complementary error function) buffers with varying thicknesses and standard deviation parameters. We have shown that the threading dislocation densities in the active regions of metamorphic tandem solar cells depend not only on the thicknesses of the buffer layers but on their compositional grading profiles. The use of S-graded buffer layers instead of linear buffers resulted in lower threading dislocation densities. Moreover, the threading dislocation densities depended strongly on the standard deviation parameters used in the S-graded buffers, with smaller values providing lower threading dislocation densities.
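As a sketch of the S-shaped (complementary error function) grading the abstract refers to, a composition-versus-depth profile might be parameterized as follows (an illustrative parameterization with a standard-deviation parameter sigma, not the authors' exact formulation):

```python
import numpy as np
from scipy.special import erfc

def s_graded_composition(z, x_final, z_mid, sigma):
    """Alloy fraction vs. depth z for an S-graded buffer: an
    erfc-shaped ramp from 0 to x_final centered at z_mid, whose
    steepness is set by the standard-deviation parameter sigma."""
    return 0.5 * x_final * erfc(-(z - z_mid) / (np.sqrt(2.0) * sigma))
```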
Rock Fracture Toughness Study Under Mixed Mode I/III Loading
NASA Astrophysics Data System (ADS)
Aliha, M. R. M.; Bahmani, A.
2017-07-01
Fracture growth in underground rock structures occurs under complex stress states, which typically include the in- and out-of-plane sliding deformation of jointed rock masses before catastrophic failure. However, the lack of a comprehensive theoretical and experimental fracture toughness study for rocks under contributions of out-of-plane deformations (i.e. mode III) is one of the shortcomings of this field. Therefore, in this research the mixed mode I/III fracture toughness of a typical rock material is investigated experimentally by means of a novel cracked disc specimen subjected to bend loading. It was shown that the specimen can provide full combinations of modes I and III, and consequently a complete set of mixed mode I/III fracture toughness data was determined for the tested marble rock. By moving from pure mode I towards pure mode III, the fracture load increased; however, the corresponding fracture toughness value became smaller. The obtained experimental fracture toughness results were finally predicted using theoretical and empirical fracture models.
Lippert, Kai-Alexander; Mukherjee, Chandan; Broschinski, Jan-Philipp; Lippert, Yvonne; Walleck, Stephan; Stammler, Anja; Bögge, Hartmut; Schnack, Jürgen; Glaser, Thorsten
2017-12-18
Single-molecule magnets (SMMs) retain a magnetization without an applied magnetic field for a decent time due to an energy barrier U for spin reversal. Despite the success in increasing U, the difficult-to-control magnetic quantum tunneling often leads to a decreased effective barrier Ueff and a fast relaxation. Here, we demonstrate the influence of the exchange coupling on the tunneling probability in two heptanuclear SMMs hosting the same spin system with the same high-spin ground state St = 21/2. A chirality-induced symmetry reduction leads to a switch of the MnIII-MnIII exchange from antiferromagnetic in the achiral SMM [MnIII6CrIII]3+ to ferromagnetic in the new chiral SMM RR-[MnIII6CrIII]3+. Multispin Hamiltonian analysis by full-matrix diagonalization demonstrates that the ferromagnetic interactions in RR-[MnIII6CrIII]3+ enforce a well-defined St = 21/2 ground state with substantially less mixing of MS substates, in contrast to [MnIII6CrIII]3+, and no tunneling pathways below the top of the energy barrier. This is experimentally verified: Ueff is smaller than the calculated energy barrier U in [MnIII6CrIII]3+ due to tunneling pathways, whereas Ueff equals U in RR-[MnIII6CrIII]3+, demonstrating the absence of quantum tunneling.
Eu(III) uptake on rectorite in the presence of humic acid: a macroscopic and spectroscopic study.
Chen, Changlun; Yang, Xin; Wei, Juan; Tan, Xiaoli; Wang, Xiangke
2013-03-01
This work contributes to the understanding of the effect of humic acid (HA) on Eu(III) uptake by Na-rectorite, using batch sorption experiments, model fitting, scanning electron microscopy, powder X-ray diffraction, Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and extended X-ray absorption fine structure (EXAFS) spectroscopy. At low pH, the presence of HA enhanced Eu(III) sorption on Na-rectorite, while it reduced Eu(III) sorption at high pH. The experimental data for Eu(III) sorption in the absence and presence of HA were simulated well by the diffuse-layer model with the aid of FITEQL 3.2 software. The basal spacing of rectorite became larger after Eu(III) and HA sorption on Na-rectorite; some of the Eu(III) ions and HA might be intercalated into the interlayer space of Na-rectorite. EXAFS analysis showed that the R(Eu-O) (the Eu-O bond distance in the first shell of Eu) and N values (coordination numbers) of the Eu(III)-HA-rectorite system were smaller than those of the Eu(III)-rectorite system. Copyright © 2012 Elsevier Inc. All rights reserved.
Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision
Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.
2018-01-01
Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945
Omidfar, K; Rasaee, M J; Modjtahedi, H; Forouzandeh, M; Taghikhani, M; Bakhtiari, A; Paknejad, M; Kashanian, S
2004-01-01
EGFRvIII is the type III deletion mutant form of the epidermal growth factor receptor (EGFR) with transforming activity. This tumor-specific antigen is ligand independent, contains a constitutively active tyrosine kinase domain and has been shown to be present in a number of human malignancies. In this study, we report the production and characterization of camel antibodies that are directed against the external domain of EGFRvIII. Antibodies developed in camels are smaller than conventional mammalian antibodies (the IgG2 and IgG3 subclasses lack light chains). This property of camel antibodies makes them ideal tools for basic research and other applications such as tumor imaging and cancer therapy. In the present study, camel antibodies were generated by immunization of camelids (Camelus bactrianus and Camelus dromedarius) with a synthetic 14-amino acid peptide corresponding to the mutated sequence of the EGFR, tissue homogenates from several patients with human glioblastoma, medulloblastoma and aggressive breast carcinoma, as well as EGFR-expressing cell lines. Three subclasses of camel IgG [conventional (IgG1, 160 kD) and heavy chain-only antibodies (IgG2 and IgG3, 90 kD)] were separated by their different binding properties to protein A and protein G affinity columns. The anti-EGFRvIII peptide antibodies from immunized camels were purified further using the EGFRvIII synthetic peptide affinity column. The purified anti-EGFRvIII peptide camel antibodies selectively bound to the EGFRvIII peptide and affinity-purified EGFRvIII from malignant tissues, and detected a protein band of 140 kD from malignant tissues by Western blot. Affinity analysis showed that the antibodies from C. bactrianus and C. dromedarius reacted with the peptide and with antigen purified from a small cell lung cancer ascitic fluid, with affinities of 2 × 10⁸ and 5 × 10⁷ M⁻¹, respectively. Since the functional antigen-binding domain of the anti-EGFRvIII antibodies in camels is much simpler and located only on the heavy chains, we are currently developing recombinant and smaller versions of the variable domain of these naturally occurring heavy-chain antibodies (V(HH)) for use in tumor imaging and cancer therapy.
Mantzari, Eleni; Hollands, Gareth J; Pechey, Rachel; Jebb, Susan; Marteau, Theresa M
2018-01-01
Sugar-sweetened beverage (SSB) consumption increases obesity risk and is linked to adverse health consequences. Large packages increase food consumption, but most evidence comes from studies comparing larger with standard packages, resulting in uncertainty regarding the impact of smaller packages. There is also little research on beverages. This qualitative study explores the experiences of consuming cola from smaller compared with larger bottles, to inform intervention strategies. Sixteen households in Cambridge, England, participating in a feasibility study assessing the impact of bottle size on in-home SSB consumption, received a set amount of cola each week for four weeks in one of four bottle sizes: 1500 ml, 1000 ml, 500 ml, or 250 ml, in random order. At the study end, household representatives were interviewed about their experiences of using each bottle, including perceptions of i) consumption level; ii) consumption-related behaviours; and iii) factors affecting consumption. Interviews were semi-structured and data analysed using the Framework approach. The present analysis focuses specifically on experiences relating to use of the smaller bottles. The smallest bottles were described as increasing drinking occasion frequency and encouraging consumption of numerous bottles in succession. Factors described as facilitating their consumption were: i) convenience and portability; ii) greater numbers of bottles available, which hindered consumption monitoring and control; iii) perceived insufficient quantity per bottle; and iv) positive attitudes. In a minority of cases the smallest bottles were perceived to have reduced consumption, but this was related to practical issues with the bottles that resulted in dislike. The perception of greater consumption and qualitative reports of drinking habits associated with the smallest bottles raise the possibility that the 'portion size effect' has a lower threshold, beyond which smaller portions and packages may increase consumption. This reinforces the need for empirical evidence to assess the in-home impact of smaller bottles on SSB consumption. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Support of Mark III Optical Interferometer
1988-11-01
[Abstract garbled by OCR in the source; only fragments are recoverable.] The report concerns the Mark III stellar optical interferometer at Mt. Wilson Observatory. Recoverable fragments mention pointing error, a low-visibility pedestal, measurements against the surface of a Zerodur sphere attached to the mirror, and that light from the two siderostats is directed toward the central building by fixed mirrors, which are necessary to preserve the polarization vectors.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-15
... Change To Correct a Typographical Error in Exchange Rule 1080 August 9, 2011. Pursuant to Section 19(b)(1... Rule 1080 (Phlx XL and XL II) to correct a typographical error. The text of the proposed rule change is... in subsection (m)(iii)(D) of Rule 1080. On July 13, 2011, the Exchange filed an immediately effective...
Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.
2005-01-01
Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
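The underlying comparison is the standard nested-model likelihood-ratio test; a minimal sketch (assuming the maximized log-likelihoods and the difference in parameter counts are available):

```python
from scipy import stats

def lr_difference_test(loglik_restricted, loglik_full, df_diff):
    """Likelihood-ratio difference test between nested models, e.g. a
    unidimensional vs. a multidimensional MRCMLM fit. Returns the G^2
    statistic and its chi-square p-value."""
    g2 = 2.0 * (loglik_full - loglik_restricted)
    return g2, stats.chi2.sf(g2, df_diff)
```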
Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan
2013-02-01
In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of Sichuan Basin, Southwest China, and the performance of this method was compared with that of ordinary kriging (OK) and regression kriging (RK). All the three methods produced the similar soil nutrient maps. However, as compared with those obtained by multiple linear regression model, the correlation coefficients between the measured values and the predicted values of soil organic matter and total N obtained by neural network model increased by 12. 3% and 16. 5% , respectively, suggesting that neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. The error analyses of the prediction values of 469 validation points indicated that the mean absolute error (MAE) , mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5. 1% (for soil organic matter), and 4.9%, 6.1% , and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6% , and 1.8% (for soil organic matter), and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK, respectively (P<0.05).
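The three validation statistics quoted (MAE, MRE, RMSE) are simple to compute; a sketch with illustrative names:

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Mean absolute error, mean relative error, and root mean squared
    error of predictions against validation measurements."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    e = y_pred - y_true
    mae = np.mean(np.abs(e))
    mre = np.mean(np.abs(e) / np.abs(y_true))
    rmse = np.sqrt(np.mean(e ** 2))
    return mae, mre, rmse
```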
Gómez-Cabello, Alba; Vicente-Rodríguez, Germán; Albers, Ulrike; Mata, Esmeralda; Rodriguez-Marroyo, Jose A.; Olivares, Pedro R.; Gusi, Narcis; Villa, Gerardo; Aznar, Susana; Gonzalez-Gross, Marcela; Casajús, Jose A.; Ara, Ignacio
2012-01-01
Background The elderly EXERNET multi-centre study aims to collect normative anthropometric data for old functionally independent adults living in Spain. Purpose To describe the standardization process and reliability of the anthropometric measurements carried out in the pilot study and during the final workshop, examining both intra- and inter-rater errors for measurements. Materials and Methods A total of 98 elderly from five different regions participated in the intra-rater error assessment, and 10 different seniors living in the city of Toledo (Spain) participated in the inter-rater assessment. We examined both intra- and inter-rater errors for heights and circumferences. Results For height, intra-rater technical errors of measurement (TEMs) were smaller than 0.25 cm. For circumferences and knee height, TEMs were smaller than 1 cm, except for waist circumference in the city of Cáceres. Reliability for heights and circumferences was greater than 98% in all cases. Inter-rater TEMs were 0.61 cm for height, 0.75 cm for knee-height and ranged between 2.70 and 3.09 cm for the circumferences measured. Inter-rater reliabilities for anthropometric measurements were always higher than 90%. Conclusion The harmonization process, including the workshop and pilot study, guarantee the quality of the anthropometric measurements in the elderly EXERNET multi-centre study. High reliability and low TEM may be expected when assessing anthropometry in elderly population. PMID:22860013
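The TEM and reliability figures quoted are conventionally computed from duplicate measurements with the standard anthropometric formulas (assumed here; the study's exact computation may differ slightly):

```python
import numpy as np

def tem(first, second):
    """Technical error of measurement for paired repeat measurements:
    TEM = sqrt(sum(d_i^2) / (2n))."""
    d = np.asarray(first, float) - np.asarray(second, float)
    return np.sqrt(np.sum(d ** 2) / (2.0 * d.size))

def reliability(first, second):
    """Reliability R = 1 - TEM^2 / SD^2, with SD from the pooled data."""
    pooled = np.concatenate([np.asarray(first, float),
                             np.asarray(second, float)])
    return 1.0 - tem(first, second) ** 2 / pooled.var(ddof=1)
```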
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
NASA Astrophysics Data System (ADS)
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with two responses, correlation arises both between measurements on the same subject and between the responses; this induces autocorrelation of the errors, which can be handled with a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. A penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
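A minimal sketch of the weighted penalized-spline fit (notation mine: X is a spline basis with chosen knots, D a penalty matrix, lam a smoothing parameter, and W the estimated error covariance matrix):

```python
import numpy as np

def weighted_pspline_coefficients(X, y, W, D, lam):
    """Solve (X' W^{-1} X + lam * D) beta = X' W^{-1} y, i.e. a
    penalized weighted least-squares fit with covariance matrix W."""
    Wi = np.linalg.inv(W)
    A = X.T @ Wi @ X + lam * D
    return np.linalg.solve(A, X.T @ Wi @ y)
```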
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The proposed Global Precipitation Mission (GPM) builds on the success of the Tropical Rainfall Measuring Mission (TRMM), offering a constellation of microwave-sensor-equipped smaller satellites in addition to a larger, multiply-instrumented "mother" satellite that will include an improved precipitation radar system to which the precipitation estimates of the smaller satellites can be tuned. Coverage by the satellites will be nearly global rather than being confined as TRMM was to lower latitudes. It is hoped that the satellite constellation can provide observations at most places on the earth at least once every three hours, though practical considerations may force some compromises. The GPM system offers the possibility of providing precipitation maps with much better time resolution than the monthly averages around which TRMM was planned, and therefore opens up new possibilities for hydrology and data assimilation into models. In this talk, methods that were developed for estimating sampling error in the rainfall averages that TRMM is providing will be used to estimate sampling error levels for GPM-era configurations. Possible impacts on GPM products of compromises in the sampling frequency will be discussed.
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
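A minimal 1D sketch of the shift-invariant KB gridding step (illustrative kernel parameters; practical implementations also apply sample-density compensation and a deapodization correction after the FFT):

```python
import numpy as np

def kb_kernel(u, width=4.0, beta=13.9):
    """Kaiser-Bessel interpolation kernel, nonzero on |u| <= width/2."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    out = np.zeros_like(u)
    m = np.abs(u) <= width / 2.0
    out[m] = np.i0(beta * np.sqrt(1.0 - (2.0 * u[m] / width) ** 2)) / np.i0(beta)
    return out

def grid_1d(kx, data, n, width=4.0, beta=13.9):
    """Spread nonuniform k-space samples onto a uniform grid of size n
    by convolution with the KB kernel (kx in grid units, 0 <= kx < n)."""
    grid = np.zeros(n, dtype=complex)
    for k, d in zip(kx, data):
        j0 = int(np.floor(k - width / 2.0)) + 1
        for j in range(j0, j0 + int(width)):
            grid[j % n] += d * kb_kernel(k - j, width, beta)[0]
    return grid
```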
Tung, Li-Chen; Yu, Wan-Hui; Lin, Gong-Hong; Yu, Tzu-Ying; Wu, Chien-Te; Tsai, Chia-Yin; Chou, Willy; Chen, Mei-Hsiang; Hsieh, Ching-Lin
2016-09-01
To develop a Tablet-based Symbol Digit Modalities Test (T-SDMT) and to examine the test-retest reliability and concurrent validity of the T-SDMT in patients with stroke. The study had two phases. In the first phase, six experts, nine college students and five outpatients participated in the development and testing of the T-SDMT. In the second phase, 52 outpatients were evaluated twice (2 weeks apart) with the T-SDMT and SDMT to examine the test-retest reliability and concurrent validity of the T-SDMT. The T-SDMT was developed via expert input and college student/patient feedback. Regarding test-retest reliability, the practise effects of the T-SDMT and SDMT were both trivial (d=0.12) but significant (p≦0.015). The improvement in the T-SDMT (4.7%) was smaller than that in the SDMT (5.6%). The minimal detectable changes (MDC%) of the T-SDMT and SDMT were 6.7 (22.8%) and 10.3 (32.8%), respectively. The T-SDMT and SDMT were highly correlated with each other at the two time points (Pearson's r=0.90-0.91). The T-SDMT demonstrated good concurrent validity with the SDMT. Because the T-SDMT had a smaller practise effect and less random measurement error (superior test-retest reliability), it is recommended over the SDMT for assessing information processing speed in patients with stroke. Implications for Rehabilitation The Symbol Digit Modalities Test (SDMT), a common measure of information processing speed, showed a substantial practise effect and considerable random measurement error in patients with stroke. The Tablet-based SDMT (T-SDMT) has been developed to reduce the practise effect and random measurement error of the SDMT in patients with stroke. The T-SDMT had smaller practise effect and random measurement error than the SDMT, which can provide more reliable assessments of information processing speed.
Department of Defense General/Flag Officer Worldwide Roster
1993-12-01
[Roster excerpt: name, rank, service, date, and assignment entries, e.g., MAHAN, CHARLES S., JR., BG, USA, 9306, 930801, III Corps Artillery, Fort Sill, OK, Commanding General; an alphabetical index of names follows.] Any errors should be brought to the attention of the particular Military Department involved with the incumbent.
Algorithm 699 - A new representation of Patterson's quadrature formulae
NASA Technical Reports Server (NTRS)
Krogh, Fred T.; Van Snyder, W.
1991-01-01
A method is presented to reduce the number of coefficients necessary to represent Patterson's quadrature formulae. It also reduces the amount of storage necessary for storing function values, and produces slightly smaller error in evaluating the formulae.
Eye size and shape in newborn children and their relation to axial length and refraction at 3 years.
Lim, Laurence Shen; Chua, Sharon; Tan, Pei Ting; Cai, Shirong; Chong, Yap-Seng; Kwek, Kenneth; Gluckman, Peter D; Fortier, Marielle V; Ngo, Cheryl; Qiu, Anqi; Saw, Seang-Mei
2015-07-01
To determine if eye size and shape at birth are associated with eye size and refractive error 3 years later. A subset of 173 full-term newborn infants from the Growing Up in Singapore Towards healthy Outcomes (GUSTO) birth cohort underwent magnetic resonance imaging (MRI) to measure the dimensions of the internal eye. Eye shape was assessed by an oblateness index, calculated as 1 - (axial length/width) or 1 - (axial length/height). Cycloplegic autorefraction (Canon Autorefractor RK-F1) and optical biometry (IOLMaster) were performed 3 years later. Both eyes of 173 children were analysed. Eyes with longer axial length at birth had smaller increases in axial length at 3 years (p < 0.001). Eyes with larger baseline volumes and surface areas had smaller increases in axial length at 3 years (p < 0.001 for both). Eyes which were more oblate at birth had greater increases in axial length at 3 years (p < 0.001). Using width to calculate oblateness, prolate eyes had smaller increases in axial length at 3 years compared to oblate eyes (p < 0.001), and, using height, prolate and spherical eyes had smaller increases in axial length at 3 years compared to oblate eyes (p < 0.001 for both). There were no associations between eye size and shape at birth and refraction, corneal curvature or myopia at 3 years. Eyes that are larger and have prolate or spherical shapes at birth exhibit smaller increases in axial length over the first 3 years of life. Eye size and shape at birth influence subsequent eye growth but not refractive error development. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
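The oblateness index used above is a one-line computation; a minimal sketch (Python; the numeric values are illustrative, not study data):

```python
def oblateness(axial_length_mm, equatorial_mm):
    """Oblateness index per the abstract: 1 - (axial length / width)
    or 1 - (axial length / height). Positive -> oblate, negative -> prolate,
    near zero -> spherical."""
    return 1.0 - axial_length_mm / equatorial_mm

print(oblateness(17.0, 17.5))   # slightly oblate (illustrative newborn values)
print(oblateness(18.0, 17.5))   # prolate
```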
Leveraging pattern matching to solve SRAM verification challenges at advanced nodes
NASA Astrophysics Data System (ADS)
Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan
2018-03-01
Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually have smaller feature sizes and tighter spacing constraints than traditional logic or other layout structures. At advanced nodes, the critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.
Sensitivity to prediction error in reach adaptation
Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza
2012-01-01
It has been proposed that the brain predicts the sensory consequences of a movement and compares this prediction to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
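The abstract's central quantity, sensitivity to error, is the ratio of the trial-to-trial change in motor commands to the size of the preceding error; the tiny sketch below (Python; the numbers are illustrative, not the study's data) shows what a declining sensitivity looks like:

```python
import numpy as np

errors = np.array([1.0, 2.0, 8.0])     # error sizes on trial n (arbitrary units)
changes = np.array([0.4, 0.7, 1.6])    # change in motor command on trial n+1

sensitivity = changes / errors         # adaptation per unit error
print(sensitivity)                     # [0.4 0.35 0.2]: declines with error size
```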
Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G
2007-10-01
Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.
NASA Astrophysics Data System (ADS)
Wang, Hongcui; Kawahara, Tatsuya
CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach quickly runs into a trade-off between the coverage of errors and an increase in perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, to achieve both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.
Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
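A schematic of the joint alignment-and-reconstruction idea is sketched below (Python). This is a deliberately degenerate toy: every projection views the same 1D object through trivial stand-in reconstruct/reproject operators, so it does not reproduce the paper's tomographic operators or refinement scheme. Estimated shifts agree with the true ones up to a common offset, the usual gauge freedom in alignment:

```python
import numpy as np

def register_shift(reference, measured):
    """Integer shift of `measured` relative to `reference` (cross-correlation)."""
    corr = np.correlate(measured - measured.mean(),
                        reference - reference.mean(), mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

def joint_align(projections, reconstruct, reproject, n_iter=5):
    """Alternate reconstruction and reprojection, re-estimating per-projection
    alignment errors against the reprojected model at each pass."""
    shifts = np.zeros(len(projections), dtype=int)
    volume = None
    for _ in range(n_iter):
        aligned = [np.roll(p, -s) for p, s in zip(projections, shifts)]
        volume = reconstruct(aligned)
        model = reproject(volume)
        shifts = np.array([register_shift(m, p)
                           for m, p in zip(model, projections)])
    return shifts, volume

rng = np.random.default_rng(1)
obj = np.exp(-((np.arange(64) - 32) / 6.0) ** 2)   # 1D "object"
true_shifts = rng.integers(-3, 4, size=8)          # unknown stage errors
projections = [np.roll(obj, s) for s in true_shifts]
est, _ = joint_align(projections,
                     reconstruct=lambda ps: np.mean(ps, axis=0),
                     reproject=lambda v: [v] * 8)
print(true_shifts, est)   # equal up to a common integer offset
```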
NASA Astrophysics Data System (ADS)
Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo
2018-06-01
Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
Kim, Matthew H.; Marulis, Loren M.; Grammer, Jennie K.; Morrison, Frederick J.; Gehring, William J.
2016-01-01
Motivational beliefs and values influence how children approach challenging activities. The present study explores motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two ERP components, the error-related negativity (ERN) and error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 four- to six-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, while stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. PMID:27898304
Engineered nanoparticles (NPs) (particle sizes ranging from 1-100 nm) have unique physical and chemical properties that differ fundamentally from their macro-sized counterparts. In addition to their smaller particle size, nanoparticles possess unique characteristics such as larg...
46 CFR 12.602 - Basic training.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Swimming while wearing a lifejacket. (v) Keeping afloat without a lifejacket. (2) Fire prevention and... extinguishers. (ii) Extinguishing smaller fires. e.g., electrical fires, oil fires, and propane fires. (iii... firefighting agent in an accommodation room or simulated engine room with fire and heavy smoke. (vii...
Stone, A M; Bushnell, W; Denne, J; Sargent, D J; Amit, O; Chen, C; Bailey-Iacona, R; Helterbrand, J; Williams, G
2011-08-01
Progression free survival (PFS) is increasingly used as a primary end-point in oncology clinical trials. This paper provides recommendations for optimal trial design, conduct and analysis in situations where PFS has the potential to be an acceptable end-point for regulatory approval. These recommendations are based on research performed by the Pharmaceutical Research and Manufacturers Association (PhRMA) sponsored PFS Working Group, including the re-analysis of 28 randomised Phase III trials from 12 companies/institutions. (1) In the assessment of PFS, there is a critical distinction between measurement error that results from random variation, which by itself tends to attenuate treatment effect, versus bias which increases the probability of a false negative or false positive finding. Investigator bias can be detected by auditing a random sample of patients by blinded, independent, central review (BICR). (2) ITT analyses generally resulted in smaller treatment effects (HRs closer to 1) than analyses that censor patients for potentially informative events (such as starting other anti-cancer therapy). (3) Interval censored analyses (ICA) are more robust to time-evaluation bias than the log-rank test. A sample based BICR audit may be employed in open or partially blinded trials and should not be required in true double-blind trials. Patients should be followed until progression even if they have discontinued treatment to be consistent with the ITT principle. ICAs should be a standard sensitivity analysis to assess time-evaluation bias. Implementation of these recommendations would standardize and in many cases simplify phase III oncology clinical trials that use a PFS primary end-point. Copyright © 2011 Elsevier Ltd. All rights reserved.
How do you design randomised trials for smaller populations? A framework.
Parmar, Mahesh K B; Sydes, Matthew R; Morris, Tim P
2016-11-25
How should we approach trial design when we can get some, but not all, of the way to the numbers required for a randomised phase III trial? We present an ordered framework for designing randomised trials to address the problem when the ideal sample size is considered larger than the number of participants that can be recruited in a reasonable time frame. Staying with the frequentist approach that is well accepted and understood in large trials, we propose a framework that includes small alterations to the design parameters. These aim to increase the numbers achievable and also potentially reduce the sample size target. The first step should always be to attempt to extend collaborations, consider broadening eligibility criteria and increase the accrual time or follow-up time. The second set of ordered considerations are the choice of research arm, outcome measures, power and target effect. If the revised design is still not feasible, in the third step we propose moving from two- to one-sided significance tests, changing the type I error rate, using covariate information at the design stage, re-randomising patients and borrowing external information. We discuss the benefits of some of these possible changes and warn against others. We illustrate, with a worked example based on the Euramos-1 trial, the application of this framework in designing a trial that is feasible, while still providing a good evidence base to evaluate a research treatment. This framework would allow appropriate evaluation of treatments when large-scale phase III trials are not possible, but where the need for high-quality randomised data is as pressing as it is for common diseases.
Fat and Sugar Metabolism During Exercise in Patients With Metabolic Myopathy
2017-08-31
Metabolism, Inborn Errors; Lipid Metabolism, Inborn Errors; Carbohydrate Metabolism, Inborn Errors; Long-Chain 3-Hydroxyacyl-CoA Dehydrogenase Deficiency; Glycogenin-1 Deficiency (Glycogen Storage Disease Type XV); Carnitine Palmitoyl Transferase 2 Deficiency; VLCAD Deficiency; Medium-chain Acyl-CoA Dehydrogenase Deficiency; Multiple Acyl-CoA Dehydrogenase Deficiency; Carnitine Transporter Deficiency; Neutral Lipid Storage Disease; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Muscle Phosphofructokinase Deficiency; Phosphoglucomutase 1 Deficiency; Phosphoglycerate Mutase Deficiency; Phosphoglycerate Kinase Deficiency; Phosphorylase Kinase Deficiency; Beta Enolase Deficiency; Lactate Dehydrogenase Deficiency; Glycogen Synthase Deficiency
Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A
2018-06-01
Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported and how they were handled were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors per trial. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation or treatment errors was found in the remaining 50 of 82 trials (61%). Based on responses from 9 of the 15 corresponding authors who were contacted regarding recruitment, randomisation and treatment errors, between 1% and 100% of the errors that occurred in their trials were reported in the trial publications. Conclusion Recruitment, randomisation and treatment errors are common in individually randomised, phase III trials published in leading medical journals, but reporting practices are inadequate and reporting standards are needed. We recommend researchers report all such errors that occurred during the trial and describe how they were handled in trial publications to improve transparency in reporting of clinical trials.
Validation of Ozone Profiles Retrieved from SAGE III Limb Scatter Measurements
NASA Technical Reports Server (NTRS)
Rault, Didier F.; Taha, Ghassan
2007-01-01
Ozone profiles retrieved from Stratospheric Aerosol and Gas Experiment (SAGE III) limb scatter measurements are compared with correlative measurements made by occultation instruments (SAGE II, SAGE III and HALOE [Halogen Occultation Experiment]), a limb scatter instrument (Optical Spectrograph and InfraRed Imager System [OSIRIS]) and a series of ozonesondes and lidars, in order to ascertain the accuracy and precision of the SAGE III instrument in limb scatter mode. The measurement relative accuracy is found to be 5-10% from the tropopause to about 45km whereas the relative precision is found to be less than 10% from 20 to 38km. The main source of error is height registration uncertainty, which is found to be Gaussian with a standard deviation of about 350m.
Simms, Leonard J; Calabrese, William R
2016-02-01
Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits--compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology--in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.
Rose, Nathan S
2013-12-01
Individual differences in working memory (WM) are related to performance on secondary memory (SM), and fluid intelligence (gF) tests. However, the source of the relation remains unclear, in part because few studies have controlled for the nature of encoding; therefore, it is unclear whether individual variation is due to encoding, maintenance, or retrieval processes. In the current study, participants performed a WM task (the levels-of-processing span task; Rose, Myerson, Roediger III, & Hale, 2010) and a SM test that tested for both targets and the distracting processing words from the initial WM task. Deeper levels of processing at encoding did not benefit WM, but did benefit subsequent SM, although the amount of benefit was smaller for those with lower WM spans. This result suggests that, despite encoding cues that facilitate retrieval from SM, low spans may have engaged in shallower, maintenance-focused processing to maintain the words in WM. Low spans also recalled fewer targets, more distractors, and more extralist intrusions than high spans, although this was partially due to low spans' poorer recall of targets, which resulted in a greater number of opportunities to commit recall errors. Delayed recall of intrusions and commission of source errors (labeling targets as processing words and vice versa) were significant negative predictors of gF. These results suggest that the ability to use source information to recall relevant information and withhold recall of irrelevant information is a critical source of both individual variation in WM and the relation between WM, SM, and gF. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Management of digital eye strain.
Coles-Brennan, Chantal; Sulley, Anna; Young, Graeme
2018-05-23
Digital eye strain, an emerging public health issue, is a condition characterised by visual disturbance and/or ocular discomfort related to the use of digital devices and resulting from a range of stresses on the ocular environment. This review aims to provide an overview of the extensive literature on digital eye strain research with particular reference to the clinical management of symptoms. As many as 90 per cent of digital device users experience symptoms of digital eye strain. Many studies suggest that the following factors are associated with digital eye strain: uncorrected refractive error (including presbyopia), accommodative and vergence anomalies, altered blinking pattern (reduced rate and incomplete blinking), excessive exposure to intense light, closer working distance, and smaller font size. Since a symptom may be caused by one or more factors, a holistic approach should be adopted. The following management strategies have been suggested: (i) appropriate correction of refractive error, including astigmatism and presbyopia; (ii) management of vergence anomalies, with the aim of inducing or leaving a small amount of heterophoria (~1.5 Δ Exo); (iii) blinking exercise/training to maintain normal blinking pattern; (iv) use of lubricating eye drops (artificial tears) to help alleviate dry eye-related symptoms; (v) contact lenses with enhanced comfort, particularly at end-of-day and in challenging environments; (vi) prescription of colour filters in all vision correction options, especially blue light-absorbing filters; and (vii) management of accommodative anomalies. Prevention is the main strategy for management of digital eye strain, which involves: (i) ensuring an ergonomic work environment and practice (through patient education and the implementation of ergonomic workplace policies); and (ii) visual examination and eye care to treat visual disorders. Special consideration is needed for people at a high risk of digital eye strain, such as computer workers and contact lens wearers. © 2018 Optometry Australia.
Mirus, B.B.; Perkins, K.S.; Nimmo, J.R.; Singha, K.
2009-01-01
To understand their relation to pedogenic development, soil hydraulic properties in the Mojave Desert were investigated for three deposit types: (i) recently deposited sediments in an active wash, (ii) a soil of early Holocene age, and (iii) a highly developed soil of late Pleistocene age. Effective parameter values were estimated for a simplified model based on Richards' equation using a flow simulator (VS2D), an inverse algorithm (UCODE-2005), and matric pressure and water content data from three ponded infiltration experiments. The inverse problem framework was designed to account for the effects of subsurface lateral spreading of infiltrated water. Although none of the inverse problems converged on a unique, best-fit parameter set, a minimum standard error of regression was reached for each deposit type. Parameter sets from the numerous inversions that reached the minimum error were used to develop probability distributions for each parameter and deposit type. Electrical resistance imaging obtained for two of the three infiltration experiments was used to independently test flow model performance. Simulations for the active wash and Holocene soil successfully depicted the lateral and vertical fluxes. Simulations of the more pedogenically developed Pleistocene soil did not adequately replicate the observed flow processes, which would require a more complex conceptual model to include smaller scale heterogeneities. The inverse-modeling results, however, indicate that with increasing age, the steep slope of the soil water retention curve shifts toward more negative matric pressures. Assigning effective soil hydraulic properties based on soil age provides a promising framework for future development of regional-scale models of soil moisture dynamics in arid environments for land-management applications. © Soil Science Society of America.
Armstrong, Craig; Samuel, Jake; Yarlett, Andrew; Cooper, Stephen-Mark; Stembridge, Mike; Stöhr, Eric J.
2016-01-01
Increased left ventricular (LV) twist and untwisting rate (LV twist mechanics) are essential responses of the heart to exercise. However, previously a large variability in LV twist mechanics during exercise has been observed, which complicates the interpretation of results. This study aimed to determine some of the physiological sources of variability in LV twist mechanics during exercise. Sixteen healthy males (age: 22 ± 4 years, V̇O2peak: 45.5 ± 6.9 ml∙kg^-1∙min^-1, range of individual anaerobic threshold (IAT): 32–69% of V̇O2peak) were assessed at rest and during exercise at: i) the same relative exercise intensity, 40%peak, ii) at 2% above IAT, and, iii) at 40%peak with hypoxia (40%peak+HYP). LV volumes were not significantly different between exercise conditions (P > 0.05). However, the mean margin of error of LV twist was significantly lower (F(2,47) = 2.08, P < 0.05) during 40%peak compared with IAT (3.0 vs. 4.1 degrees). Despite the same workload and similar LV volumes, hypoxia increased LV twist and untwisting rate (P < 0.05), but the mean margin of error remained similar to that during 40%peak (3.2 degrees, P > 0.05). Overall, LV twist mechanics were linearly related to rate pressure product. During exercise, the intra-individual variability of LV twist mechanics is smaller at the same relative exercise intensity compared with IAT. However, the absolute magnitude (degrees) of LV twist mechanics appears to be associated with the prevailing rate pressure product. Exercise tests that evaluate LV twist mechanics should be standardised by relative exercise intensity and rate pressure product be taken into account when interpreting results. PMID:27100099
Carborane Burning Rate Modifiers
1980-03-20
octyne, 1-pentyne and 5-chloro-1-pentyne) are much smaller than those of the less electrophilic acetylenes. The values of ΔHt reflect the... fluorine-containing derivatives (III) and (V) were prepared by reacting (I) and (II) with excess PF5 in hexane. Thus, 0.0032 moles of the carborane
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
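To make the weighting schemes concrete, here is a minimal multiple-linear-regression sketch (Python; the data-generating model and the 10% coefficient of variation are illustrative assumptions, not the study's setup). Error-based weights are inverse error variances; a constant coefficient of variation makes the error standard deviation proportional to the dependent variable, which strongly down-weights high values:

```python
import numpy as np

def wls(X, y, sigma):
    """Weighted least squares with weights 1/sigma^2 (error-based weighting),
    implemented by scaling each row of X and y by 1/sigma."""
    w = 1.0 / sigma
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([np.ones(n), rng.uniform(0.0, 4.0, n)])
beta_true = np.array([1.0, 2.5])
mu = X @ beta_true
y = mu + rng.normal(0.0, 0.10 * mu)        # constant 10% coefficient of variation

beta_ols = wls(X, y, sigma=np.ones(n))     # unweighted fit
beta_cv = wls(X, y, sigma=0.10 * y)        # CV-based weights emphasize small y
print(beta_ols, beta_cv)
```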
Metrology for Industry for use in the Manufacture of Grazing Incidence Beam Line Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metz, James P.; Parks, Robert E.
2014-12-01
The goal of this SBIR was to determine the slope sensitivity of Specular Reflection Deflectometry (SRD) and whether shearing methods had the sensitivity to be able to separate errors in the test equipment from slope error in the unit under test (UUT), or mirror. After many variations of test parameters it does not appear that SRD yields results much better than 1 μ radian RMS independent of how much averaging is done. Of course, a single number slope sensitivity over the full range of spatial scales is not a very insightful number, in the same sense that a single number phase or height RMS value in interferometry does not tell the full story. However, the 1 μ radian RMS number is meaningful when contrasted with a sensitivity goal of better than 0.1 μ radian RMS. Shearing is a time proven method of separating the errors in a measurement from the actual shape of a UUT. It is accomplished by taking multiple measurements while moving the UUT relative to the test instrument. This process makes it possible to separate the two error sources but only to a sensitivity of about 1 μ radian RMS. Another aspect of our conclusions is that this limit probably holds largely independent of the spatial scale of the test equipment. In the proposal for this work it was suggested that a test screen the full size of the UUT could be used to determine the slopes on scales of maybe 0.01 to full scale of the UUT while smaller screens and shorter focal length lenses could be used to measure shorter, or smaller, patches of slope. What we failed to take into consideration was that as the scale of the test equipment got smaller so too did the optical lever arm on which the slope was calculated. Although we did not do a test with a shorter focal length lens over a smaller sample area it is hard to argue with the logic that the slope sensitivity will be about the same independent of the spatial scale of the measurement assuming the test equipment is similarly scaled. On a more positive note, SRD does appear to be a highly flexible, easy to implement, rather inexpensive test for free form optics that require a dynamic range that exceeds that of interferometry. These optics are quite often specified to have more relaxed slope errors, on the order of 1 μ radian RMS or greater. It would be shortsighted to not recognize the value of this test method in the bigger picture.
How good are the Garvey-Kelson predictions of nuclear masses?
NASA Astrophysics Data System (ADS)
Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.
2009-09-01
The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short range mass predictions. A systematic study of the way the error grows as a function of the iteration and the distance to the known masses region, shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
NASA Astrophysics Data System (ADS)
Sakata, Shojiro; Fujisawa, Masaya
It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance dFR. Since dFR is not smaller than the Goppa designed distance dG, that algorithm can correct up to ⌊(dG-1)/2⌋ errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to ⌊(dG-g-1)/2⌋ errors similarly to the basic algorithm by Skorobogatov-Vladut. But, is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.
McMahon, Camilla M.; Henderson, Heather A.
2014-01-01
Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088
In-die mask registration measurement on 28nm-node and beyond
NASA Astrophysics Data System (ADS)
Chen, Shen Hung; Cheng, Yung Feng; Chen, Ming Jui
2013-09-01
As semiconductors scale to smaller nodes, the critical dimensions (CDs) of the process become smaller and smaller. For lithography, RET (Resolution Enhancement Technology) applications can be used for wafer printing of smaller CD/pitch at the 28nm node and beyond. SMO (Source Mask Optimization), DPT (Double Patterning Technology) and SADP (Self-Aligned Double Patterning) can provide lower k1 values for lithography. At the same time, image placement error and overlay control also become more and more important for smaller chip sizes (advanced nodes). Mask registration (image placement error) and mask overlay are important factors affecting wafer overlay control and performance, especially for DPT or SADP. In the traditional method, designed registration marks (cross type, square type) with larger CDs were placed in the scribe lines of the mask frame for registration and overlay measurement. However, these patterns are far from the real patterns; they do not directly reflect the registration of the real pattern, which makes the method unconvincing. In this study, in-die (in-chip) registration measurement is introduced. We extract dummy patterns that are close to the main pattern from the post-OPC (Optical Proximity Correction) GDS according to our desired rule and choose patterns that are distributed uniformly over the whole mask. The convergence test shows that a 100-point measurement gives a reliable result.
Stuellein, Nicole; Radach, Ralph R; Jacobs, Arthur M; Hofmann, Markus J
2016-05-15
Computational models of word recognition already successfully used associative spreading from orthographic to semantic levels to account for false memories. But can they also account for semantic effects on event-related potentials in a recognition memory task? To address this question, target words in the present study had either many or few semantic associates in the stimulus set. We found larger P200 amplitudes and smaller N400 amplitudes for old words in comparison to new words. Words with many semantic associates led to larger P200 amplitudes and a smaller N400 in comparison to words with a smaller number of semantic associations. We also obtained inverted response time and accuracy effects for old and new words: faster response times and fewer errors were found for old words that had many semantic associates, whereas new words with a large number of semantic associates produced slower response times and more errors. Both behavioral and electrophysiological results indicate that semantic associations between words can facilitate top-down driven lexical access and semantic integration in recognition memory. Our results support neurophysiologically plausible predictions of the Associative Read-Out Model, which suggests top-down connections from semantic to orthographic layers. Copyright © 2016 Elsevier B.V. All rights reserved.
Solid-State Kinetic Investigations of Nonisothermal Reduction of Iron Species Supported on SBA-15
2017-01-01
Iron oxide catalysts supported on nanostructured silica SBA-15 were synthesized with various iron loadings using two different precursors. Structural characterization of the as-prepared FexOy/SBA-15 samples was performed by nitrogen physisorption, X-ray diffraction, DR-UV-Vis spectroscopy, and Mössbauer spectroscopy. An increasing size of the resulting iron species correlated with an increasing iron loading. Significantly smaller iron species were obtained from (Fe(III), NH4)-citrate precursors compared to Fe(III)-nitrate precursors. Moreover, smaller iron species resulted in a smoother surface of the support material. Temperature-programmed reduction (TPR) of the FexOy/SBA-15 samples with H2 revealed better reducibility of the samples originating from Fe(III)-nitrate precursors. Varying the iron loading led to a change in reduction mechanism. TPR traces were analyzed by model-independent Kissinger method, Ozawa, Flynn, and Wall (OFW) method, and model-dependent Coats-Redfern method. JMAK kinetic analysis afforded a one-dimensional reduction process for the FexOy/SBA-15 samples. The Kissinger method yielded the lowest apparent activation energy for the lowest loaded citrate sample (Ea ≈ 39 kJ/mol). Conversely, the lowest loaded nitrate sample possessed the highest apparent activation energy (Ea ≈ 88 kJ/mol). For samples obtained from Fe(III)-nitrate precursors, Ea decreased with increasing iron loading. Apparent activation energies from model-independent analysis methods agreed well with those from model-dependent methods. Nucleation as rate-determining step in the reduction of the iron oxide species was consistent with the Mampel solid-state reaction model. PMID:29230346
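The Kissinger method used above relates the TPR peak temperature Tp at several heating rates β through ln(β/Tp²) = -Ea/(R·Tp) + const, so the apparent activation energy follows from a straight-line fit; a minimal sketch (Python; the heating rates and peak temperatures are illustrative, not the paper's data):

```python
import numpy as np

# Kissinger analysis: ln(beta / Tp^2) = -Ea / (R * Tp) + const,
# fit over peak temperatures Tp measured at several heating rates beta.
R = 8.314                                       # J mol^-1 K^-1
beta = np.array([2.0, 5.0, 10.0, 20.0]) / 60.0  # heating rates, K/s (illustrative)
Tp = np.array([650.0, 668.0, 683.0, 699.0])     # TPR peak temperatures, K (illustrative)

slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R
print(f"apparent activation energy: {Ea / 1e3:.0f} kJ/mol")
```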
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target, from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using a MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion, (2) using MODTRAN 3.5 a simulation of the image's characteristic curves is generated, (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface, and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10^-5 and up to 14 times smaller than errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10^-4, and up to 41 times smaller than errors in the visible region.
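Step (1) above, the DN-to-radiance conversion, is the standard linear rescaling for Landsat TM; a minimal sketch (Python; the calibration constants shown are placeholders, not actual TM band values):

```python
def dn_to_radiance(dn, lmin, lmax, qcalmin=0.0, qcalmax=255.0):
    """At-sensor spectral radiance (W m^-2 sr^-1 um^-1) from a digital number,
    using the standard linear rescaling between the band's Lmin and Lmax."""
    return (lmax - lmin) / (qcalmax - qcalmin) * (dn - qcalmin) + lmin

print(dn_to_radiance(128, lmin=-1.5, lmax=152.0))   # placeholder band constants
```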
Quality and strength of patient safety climate on medical-surgical units.
Hughes, Linda C; Chang, Yunkyung; Mark, Barbara A
2009-01-01
Describing the safety climate in hospitals is an important first step in creating work environments where safety is a priority. Yet, little is known about the patient safety climate on medical-surgical units. Study purposes were to describe quality and strength of the patient safety climate on medical-surgical units and explore hospital and unit characteristics associated with this climate. Data came from a larger organizational study to investigate hospital and unit characteristics associated with organizational, nurse, and patient outcomes. The sample for this study was 3,689 RNs on 286 medical-surgical units in 146 hospitals. Nursing workgroup and managerial commitment to safety were the two most strongly positive attributes of the patient safety climate. However, issues surrounding the balance between job duties and safety compliance and nurses' reluctance to reveal errors continue to be problematic. Nurses in Magnet hospitals were more likely to communicate about errors and participate in error-related problem solving. Nurses on smaller units and units with lower work complexity reported greater safety compliance and were more likely to communicate about and reveal errors. Nurses on smaller units also reported greater commitment to patient safety and participation in error-related problem solving. Nursing workgroup commitment to safety is a valuable resource that can be leveraged to promote a sense of personal responsibility for and shared ownership of patient safety. Managers can capitalize on this commitment by promoting a work environment in which control over nursing practice and active participation in unit decisions are encouraged and by developing channels of communication that increase staff nurse involvement in identifying patient safety issues, prioritizing unit-level safety goals, and resolving day-to-day operational problems that have the potential to jeopardize patient safety.
NASA Astrophysics Data System (ADS)
Zhang, Y.
2017-12-01
The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme. Overall, the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behaves like the Anticipated Potential Vorticity Method, present optimal results.
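To make the flux-operator family concrete, a 1D periodic sketch is given below (Python; a uniform-grid analogue for illustration, not the paper's icosahedral-grid operators). The nominally third-order flux is the fourth-order centered flux plus an upwind diffusion term scaled by a damping coefficient beta; beta = 1 gives the fully upwind-biased scheme and beta = 0 recovers the centered fourth-order flux:

```python
import numpy as np

def flux_third_order(q, u, beta=1.0):
    """Interface flux F[i] at i-1/2 for advection on a uniform periodic grid:
    the fourth-order centered flux plus a beta-scaled upwind diffusion term.
    beta = 1 -> nominally third-order upwind-biased; beta = 0 -> fourth-order."""
    qm2, qm1, qp1 = np.roll(q, 2), np.roll(q, 1), np.roll(q, -1)
    centered = u / 12.0 * (7.0 * (q + qm1) - (qp1 + qm2))
    upwind = beta * abs(u) / 12.0 * ((qp1 - qm2) - 3.0 * (q - qm1))
    return centered + upwind

# one forward-Euler advection step dq/dt = -dF/dx on a periodic domain
n, u = 128, 1.0
dx, dt = 1.0 / n, 0.5 / n                  # Courant number 0.5
x = np.arange(n) * dx
q = np.exp(-200.0 * (x - 0.5) ** 2)
F = flux_third_order(q, u)
q = q - dt / dx * (np.roll(F, -1) - F)     # F_{i+1/2} - F_{i-1/2}
```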
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
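A minimal sketch of the kind of mapping involved, from a small stack of pairwise subtraction images to one perfusion image, is shown below (Python/PyTorch; this tiny architecture is a hypothetical stand-in, as the paper's actual network and training details are not reproduced here). In the study's setup, such a network would be trained with an MSE-type loss against the six- or seven-pair averaged ground truth:

```python
import torch
import torch.nn as nn

class ASLDenoiser(nn.Module):
    """Hypothetical CNN mapping k pairwise subtraction images (k channels)
    to a single perfusion image (1 channel)."""
    def __init__(self, k=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = ASLDenoiser(k=3)
fake = torch.randn(1, 3, 64, 64)   # a batch of 3 subtraction images
print(model(fake).shape)           # -> torch.Size([1, 1, 64, 64])
```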
Micro-mass standards to calibrate the sensitivity of mass comparators
NASA Astrophysics Data System (ADS)
Madec, Tanguy; Mann, Gaëlle; Meury, Paul-André; Rabault, Thierry
2007-10-01
In mass metrology, the standards currently used are calibrated by a chain of comparisons, performed using mass comparators, that extends ultimately from the international prototype (which is the definition of the unit of mass) to the standards in routine use. The differences measured in the course of these comparisons become smaller and smaller as the standards approach the definitions of their units, precisely because of how accurately they have been adjusted. One source of uncertainty in the determination of the difference of mass between the mass compared and the reference mass is the sensitivity error of the comparator used. Unfortunately, in the market there are no mass standards small enough (of the order of a few hundreds of micrograms) for a valid evaluation of this source of uncertainty. The users of these comparators therefore have no choice but to rely on the characteristics claimed by the makers of the comparators, or else to determine this sensitivity error at higher values (at least 1 mg) and interpolate from this result to smaller differences of mass. For this reason, the LNE decided to produce and calibrate micro-mass standards having nominal values between 100 µg and 900 µg. These standards were developed, then tested in multiple comparisons on an A5 type automatic comparator. They have since been qualified and calibrated in a weighing design, repeatedly and over an extended period of time, to establish their stability with respect to oxidation and the harmlessness of the handling and storage procedure associated with their use. Finally, the micro-standards so qualified were used to characterize the sensitivity errors of two of the LNE's mass comparators, including the one used to tie France's Platinum reference standard (Pt 35) to stainless steel and superalloy standards.
17 CFR 240.12b-2 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... price and number of shares sold. (iii) Once an issuer fails to qualify for smaller reporting company... deficiency, or a combination of deficiencies, in internal control over financial reporting such that there is... control over financial reporting that is less severe than a material weakness, yet important enough to...
40 CFR 747.115 - Mixed mono and diamides of an organic acid.
Code of Federal Regulations, 2010 CFR
2010-07-01
... warning statement shall be no smaller than six point type. All required label text shall be of sufficient..., commerce, importer, impurity, Inventory, manufacturer, person, process, processor, and small quantities... control of the processor. (ii) Distribution in commerce is limited to purposes of export. (iii) The...
Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber
Acciarri, R.; Adams, C.; Asaadi, J.; ...
2017-03-09
Error suppression via complementary gauge choices in Reed-Muller codes
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Jochym-O'Connor, Tomas
2017-09-01
Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.
Kim, Matthew H; Marulis, Loren M; Grammer, Jennie K; Morrison, Frederick J; Gehring, William J
2017-03-01
Motivational beliefs and values influence how children approach challenging activities. The current study explored motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two event-related potential (ERP) components: the error-related negativity (ERN) and the error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 4- to 6-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, whereas stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection
Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
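A minimal sketch of the alignment step in such a joint scheme is shown below (1-D phase correlation between a measured projection and its reprojection; the reconstruct/reproject operator pair is left abstract and the names are placeholders):

```python
import numpy as np

def estimate_shift(measured, reprojected):
    """Per-projection alignment error via 1-D phase correlation --
    a stand-in for the joint estimation step described above."""
    F = np.fft.fft(measured) * np.conj(np.fft.fft(reprojected))
    corr = np.real(np.fft.ifft(F / (np.abs(F) + 1e-12)))
    shift = int(np.argmax(corr))
    return shift if shift <= measured.size // 2 else shift - measured.size

# Iterative refinement skeleton: reconstruct from the current alignment,
# reproject, estimate and apply per-projection shifts, and repeat until
# the shifts stop changing. `reconstruct`/`reproject` stand for any
# tomographic operator pair (e.g., filtered back-projection and forward
# projection).
```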
The possible existence of Pop III NS-BH binary and its detectability
NASA Astrophysics Data System (ADS)
Kinugawa, Tomoya; Nakamura, Takashi; Nakano, Hiroyuki
2017-02-01
In population synthesis simulations of Pop III stars, many BH (black hole)-BH binaries with merger times less than the age of the Universe (τH) are formed, while NS (neutron star)-BH binaries are not. The reason is that Pop III stars have no metals, so no mass loss is expected; in the final supernova explosion that forms the NS, so much mass is lost that the semimajor axis becomes too large for Pop III NS-BH binaries to merge within τH. However, it is well established from observations of pulsar proper motions that NSs receive kick velocities of order 200-500 km s^-1. The kick can therefore make the semimajor axes of about half of the NS-BH binaries smaller than the earlier argument suggests, decreasing their merger times. We perform population synthesis Monte Carlo simulations of Pop III NS-BH binaries including the NS kick and find that the Pop III NS-BH merger rate is about 1 Gpc^-3 yr^-1. This suggests that there is a good chance of detecting Pop III NS-BH mergers in O2 (Observation run 2) of Advanced LIGO and Advanced Virgo from this autumn.
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
To address the low machining accuracy and poorly controlled thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation are studied for a two-turntable five-axis machine tool. Experiments measuring heat sources and thermal errors are carried out, and grey relational analysis (GRA) is introduced to select the temperature variables used for thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an artificial neural network (ANN) model is presented, and the artificial bee colony (ABC) algorithm is introduced to train the link weights of the ANN; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the predictions of least squares regression (LSR), ANN and ABC-NN are compared with measured spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
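The GRA selection step can be sketched as follows (a generic grey relational grade computation with the customary distinguishing coefficient ρ = 0.5; the array shapes are assumptions, not the paper's data):

```python
import numpy as np

def grey_relational_grade(temps, thermal_error, rho=0.5):
    """Rank candidate temperature sensors by grey relational grade
    against the measured thermal error (reference sequence).

    temps: (n_samples, n_sensors); thermal_error: (n_samples,).
    Returns one grade per sensor; larger = more relevant.
    """
    X = np.column_stack([thermal_error, temps])        # reference in col 0
    X = (X - X.min(0)) / (X.max(0) - X.min(0))         # normalize to [0, 1]
    delta = np.abs(X[:, 1:] - X[:, :1])                # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)    # relational coefficients
    return xi.mean(axis=0)                             # grade per sensor
```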
Error-related brain activity and error awareness in an error classification paradigm.
Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E
2016-10-01
Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear what role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.
The Infrared Hubble Diagram of Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Krisciunas, Kevin
Photometry of Type Ia supernovae reveals that these objects are standardizable candles in optical passbands - the peak luminosities are related to the rate of decline after maximum light. In the near-infrared bands, there is essentially a characteristic brightness at maximum light for each photometric band. Thus, in the near-infrared they are better than standardizable candles; they are essentially standard candles. Their absolute magnitudes are known to ±0.15 magnitude or better. The infrared observations have the extra advantage that interstellar extinction by dust along the line of sight is a factor of 3-10 smaller than in the optical B- and V-bands. The size of any systematic error in the infrared extinction corrections typically becomes smaller than the photometric errors of the observations. Thus, we can obtain distances to the hosts of Type Ia supernovae to ±8% or better. This is particularly useful for extragalactic astronomy and precise measurements of the dark energy component of the universe.
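The quoted ±8% distance precision follows directly from the ±0.15 mag scatter; as a check (standard distance-modulus algebra, not taken from the abstract):

```latex
% Distance error implied by a standard-candle scatter of sigma_mu = 0.15 mag:
% mu = m - M,  d = 10^{(\mu + 5)/5}\,\mathrm{pc}, so
\frac{\sigma_d}{d} = \frac{\ln 10}{5}\,\sigma_\mu
                   \approx 0.461 \times 0.15 \approx 7\%
```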
Acharya, Ashith B
2014-05-01
Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed lower standard error of estimate and higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, test of regression formulae on a control sample (n = 33, 21-85 years) showed smaller mean absolute difference (8.3 vs. 8.8 years) and greater frequency of smaller errors (73% vs. 67% age estimates ≤ ± 10 years) for area than for length. These suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.
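A sketch of this train-then-validate workflow with synthetic numbers (the coefficients, noise level, and sample sizes below are illustrative, not the study's data):

```python
import numpy as np

# Fit age on translucency area, then check the formula on a control sample.
rng = np.random.default_rng(0)
area_tr = rng.uniform(2, 20, 100)                   # training predictor
age_tr = 20 + 3.0 * area_tr + rng.normal(0, 9, 100)
b, a = np.polyfit(area_tr, age_tr, 1)               # slope, intercept
see = np.sqrt(np.sum((age_tr - (a + b * area_tr))**2) / (len(age_tr) - 2))

area_te = rng.uniform(2, 20, 33)                    # control sample
age_te = 20 + 3.0 * area_te + rng.normal(0, 9, 33)
pred = a + b * area_te
mad = np.mean(np.abs(pred - age_te))                # mean absolute difference
frac10 = np.mean(np.abs(pred - age_te) <= 10)       # share of estimates <= +/-10 y
print(see, mad, frac10)
```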
Rogalsky, Corianne
2009-01-01
Numerous studies have identified an anterior temporal lobe (ATL) region that responds preferentially to sentence-level stimuli. It is unclear, however, whether this activity reflects a response to syntactic computations or some form of semantic integration. This distinction is difficult to investigate with the stimulus manipulations and anomaly detection paradigms traditionally implemented. The present functional magnetic resonance imaging study addresses this question via a selective attention paradigm. Subjects monitored for occasional semantic anomalies or occasional syntactic errors, thus directing their attention to semantic integration, or to syntactic properties of the sentences. The hemodynamic response in the sentence-selective ATL region (defined with a localizer scan) was examined during anomaly/error-free sentences only, to avoid confounds due to error detection. The majority of the sentence-specific region of interest was equally modulated by attention to syntactic or compositional semantic features, whereas a smaller subregion was only modulated by the semantic task. We suggest that the sentence-specific ATL region is sensitive to both syntactic and integrative semantic functions during sentence processing, with a smaller portion of this area preferentially involved in the latter. This study also suggests that selective attention paradigms may be effective tools to investigate the functional diversity of networks involved in sentence processing. PMID:18669589
The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.
NASA Technical Reports Server (NTRS)
Russell, J. M., III; Drayson, S. R.
1972-01-01
Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.
NASA Astrophysics Data System (ADS)
Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas
2017-10-01
In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference of the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10^15 molecules cm^-2 for an integration time of 1 min; the fit error for the mini MAX-DOAS is around 0.7 × 10^15 molecules cm^-2. Although the HONO delta SCDs are normally smaller than 6 × 10^15 molecules cm^-2, consistent time series of HONO delta SCDs are retrieved from the measurements of the different instruments. Fits with a sequential Fraunhofer reference spectrum (FRS) and with a daily noon FRS lead to similar consistency. Apart from the mini MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10^15 molecules cm^-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16% for the elevation angle of 1°. The correlations decrease with increasing elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of the HONO results from their respective fit programs. In general the errors are smaller than 0.3 × 10^15 molecules cm^-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs retrieved in the three selected spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 10^15 molecules cm^-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting for the three spectral ranges. The results show that water vapour absorption, the temperature and wavelength dependence of the O4 absorption, the temperature dependence of the Ring spectrum, and the polynomial and intensity offset corrections together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In this fit range the overall systematic uncertainty is about 0.87 × 10^15 molecules cm^-2, much smaller than in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10^15 molecules cm^-2, which is only 25% of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, HONO delta SCDs can exceed the detection limit of 0.2 × 10^15 molecules cm^-2, with an uncertainty of ~0.9 × 10^15 molecules cm^-2, for half of the daytime measurements (usually in the morning).
Previsional space during direct laryngoscopy: Implication in the difficult laryngoscopy.
Park, Seongjoo; Han, Ji-Won; Cha, Sukwon; Han, Sung-Hee; Kim, Jin-Hee
2017-07-01
The laryngoscope should displace oral soft tissues forward out of the operator's vision. Therefore, the space in front of the view may be critical for determining the laryngoscopic view. The aim was to investigate the difference in the previsional space during difficult versus easy laryngoscopy (EL). Under general anesthesia, digital photographs of the lateral view of the head and neck were taken in the horizontal sniffing position, after head extension, and during laryngoscopy with a defined force (50 N). Three points (thyroid notch (T), maxillary incisor (I), and mandibular mentum (M)) were marked on the photograph. The previsional space was defined as the TIM triangle. We compared these areas and other variables of the TIM triangle between male patients with difficult laryngoscopy (DL: Cormack-Lehane III-IV, n = 12) and age- and body mass index-matched male patients with EL (Cormack-Lehane I-II, n = 12). When the head was extended, the TIM triangle areas in DL were significantly smaller than in EL. During laryngoscopy, all values of the TIM triangle in DL, including the TIM area (16.4 ± 3.7 vs 22.6 ± 2.8 cm^2, P < .01), were significantly smaller than the values in EL. The previsional space was smaller in patients with DL than in those with EL. The TIM triangle could suggest a new way to explain the mechanism underlying DL.
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
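The trade-off that the calibration algorithm formalizes can be seen in a toy adaptive estimator (an LMS-style scalar update, not the paper's Bayesian filter; the gains and noise levels are illustrative):

```python
import numpy as np

# w <- w + lr * x * (y - x*w): larger lr converges faster but settles at
# a larger steady-state error; smaller lr does the opposite.
def simulate(lr, w_true=1.0, steps=2000, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = 0.0
    track = np.empty(steps)
    for t in range(steps):
        x = rng.normal()
        y = w_true * x + noise * rng.normal()
        w += lr * x * (y - x * w)
        track[t] = (w - w_true) ** 2
    return track

for lr in (0.01, 0.05, 0.2):
    err = simulate(lr)
    print(lr, err[:200].mean(), err[-500:].mean())   # early vs steady-state error
```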
NASA Technical Reports Server (NTRS)
Hurley, K.; Briggs, M.; Connaughton, V.; Meegan, C.; von Kienlin, A.; Rau, A.; Zhang, X.; Golenetskii, S.; Aptekar, R.; Mazets, E.;
2012-01-01
In the first two years of operation of the Fermi GBM, the 9-spacecraft Interplanetary Network (IPN) detected 158 GBM bursts with one or two distant spacecraft, and triangulated them to annuli or error boxes. Combining the IPN and GBM localizations leads to error boxes which are up to 4 orders of magnitude smaller than those of the GBM alone. These localizations comprise the IPN supplement to the GBM catalog, and they support a wide range of scientific investigations.
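For reference, the triangulation geometry behind these annuli (standard IPN relations, with illustrative numbers):

```latex
% A burst arriving with time delay \delta T at two spacecraft separated
% by a baseline D lies on an annulus of half-angle \theta about the
% baseline vector, with half-width set by the timing uncertainty \sigma_t:
\cos\theta = \frac{c\,\delta T}{D}, \qquad
d\theta = \frac{c\,\sigma_t}{D\,\sin\theta}
% For an interplanetary baseline D ~ 1 AU and \sigma_t ~ 10 ms, d\theta is
% of order arcseconds to arcminutes, far smaller than a GBM-only error box.
```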
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to enable an empirical mean-squared-error comparison between the Bayes estimators and the existing minimum variance unbiased and maximum likelihood estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
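The flavor of this comparison is easy to reproduce for the gamma prior (a Monte Carlo sketch; the prior parameters and sample size are arbitrary):

```python
import numpy as np

# Bayes estimator of a Poisson rate under a Gamma(alpha, beta) prior
# (posterior mean) versus the MLE, which is also the MVUE here.
rng = np.random.default_rng(1)
alpha, beta, n, reps = 2.0, 1.0, 10, 20000
lam = rng.gamma(alpha, 1.0 / beta, reps)            # rates drawn from the prior
x = rng.poisson(lam[:, None], (reps, n))            # n observations per rate
mle = x.mean(axis=1)
bayes = (alpha + x.sum(axis=1)) / (beta + n)        # posterior mean
print("MSE (MLE):  ", np.mean((mle - lam) ** 2))
print("MSE (Bayes):", np.mean((bayes - lam) ** 2))  # appreciably smaller
```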
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2012 CFR
2012-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of...
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2014 CFR
2014-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of...
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2013 CFR
2013-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of...
Isothermal chemical denaturation of large proteins: Path-dependence and irreversibility.
Wafer, Lucas; Kloczewiak, Marek; Polleck, Sharon M; Luo, Yin
2017-12-15
State functions (e.g., ΔG) are path independent and quantitatively describe the equilibrium states of a thermodynamic system. Isothermal chemical denaturation (ICD) is often used to extrapolate state function parameters for protein unfolding in native buffer conditions. The approach is prudent when the unfolding/refolding processes are path independent and reversible, but may lead to erroneous results if the processes are not reversible. The reversibility was demonstrated in several early studies for smaller proteins, but was assumed in some reports for large proteins with complex structures. In this work, the unfolding/refolding of several proteins were systematically studied using an automated ICD instrument. It is shown that: (i) the apparent unfolding mechanism and conformational stability of large proteins can be denaturant-dependent, (ii) equilibration times for large proteins are non-trivial and may introduce significant error into calculations of ΔG, (iii) fluorescence emission spectroscopy may not correspond to other methods, such as circular dichroism, when used to measure protein unfolding, and (iv) irreversible unfolding and hysteresis can occur in the absence of aggregation. These results suggest that thorough confirmation of the state functions by, for example, performing refolding experiments or using additional denaturants, is needed when quantitatively studying the thermodynamics of protein unfolding using ICD. Copyright © 2017 Elsevier Inc. All rights reserved.
Mazlomi, Adel; Golbabaei, Farideh; Farhang Dehghan, Somayeh; Abbasinia, Marzieh; Mahmoud Khani, Somayeh; Ansari, Mohammad; Hosseini, Mostafa
2017-09-01
This article aimed to investigate the effect of heat stress on cognitive performance and the blood concentration of stress hormones among workers of a foundry plant. Seventy workers were studied, divided into exposed (35 people) and unexposed (35 people) groups. The wet bulb globe temperature (WBGT) index was measured for heat stress assessment. The cognitive performance tests were conducted using the Stroop color word test (SCWT) before and during working hours. For the assessment of the serum level of cortisol and the plasma levels of adrenaline and noradrenaline, blood samples were taken during working hours from both groups. Only for SCWT III was there a significant relationship between heat stress and test duration, error rate and reaction time. The laboratory test results revealed significantly higher concentrations of cortisol, adrenaline and noradrenaline in the exposed subjects than in the unexposed group. Cortisol, adrenaline and noradrenaline correlated positively with the WBGT index, with the test duration and reaction time of SCWT III, and with the number of errors on SCWT I, SCWT II and SCWT III during work. Heat stress can lead to an increase in the blood level of stress hormones, resulting in cognitive performance impairment.
Electrodeposition of Al-Ta alloys in NaCl-KCl-AlCl3 molten salt containing TaCl5
NASA Astrophysics Data System (ADS)
Sato, Kazuki; Matsushima, Hisayoshi; Ueda, Mikito
2016-12-01
To form Al-Ta alloys for high temperature oxidation resistance components, molten salt electrolysis was carried out in an AlCl3-NaCl-KCl melt containing TaCl5 at 423 K. The voltammogram showed two cathodic waves at 0.45 V and 0.7 V vs. Al/Al(III), which may correspond to reduction from Ta(V) to Ta(III) and from Ta(III) to tantalum metal, respectively. Electrodeposits of Al and Ta were obtained in the range from -0.05 to 0.3 V and the highest concentration of Ta in the electrodeposit was 72 at% at 0.3 V. With increasing Ta content in the alloy, the morphology of the electrodeposits became powdery and the particle size smaller.
Bipolar doping and band-gap anomalies in delafossite transparent conductive oxides.
Nie, Xiliang; Wei, Su-Huai; Zhang, S B
2002-02-11
Doping wide-gap materials p type is highly desirable but often difficult. This makes the recent discovery of p-type delafossite oxides, CuM(III)O2, very attractive. The CuM(III)O2 also show unique and unexplained physical properties: an increasing band gap from M(III) = Al, Ga, to In, not seen in conventional semiconductors. The largest-gap CuInO2 can mysteriously be doped both n and p type, but not the smaller-gap CuAlO2 and CuGaO2. Here, we show that both properties are results of a large disparity between the fundamental gap and the apparent optical gap, a finding that could lead to a breakthrough in the study of bipolarly dopable wide-gap semiconductor oxides.
5 CFR 1605.11 - Makeup of missed or insufficient contributions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... associated breakage to the participant's account in accordance with § 1605.2. (c) Employee makeup... employing agency acknowledges that an error has occurred which has caused a smaller amount of employee... establish a schedule to make up the deficient contributions through future payroll deductions. Employee...
Performance improvement of robots using a learning control scheme
NASA Technical Reports Server (NTRS)
Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.
1987-01-01
Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
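A minimal sketch of such an update law (a PD-type iterative learning control step; the gains are illustrative and must satisfy convergence conditions like those derived in the paper):

```python
import numpy as np

def learn_command(u, e, dt, kp=0.6, kd=0.05):
    """Off-line learning update: after each cycle, correct the recorded
    command with that cycle's error and error rate.

    u: command samples over one cycle; e: tracking error samples;
    dt: sample spacing. Returns the command for the next repetition.
    """
    de = np.gradient(e, dt)            # error rate from the recorded cycle
    return u + kp * e + kd * de        # PD-type iterative learning update
```

Consistent with the abstract, including the derivative term (kd > 0) speeds up the error reduction across repetitions, while a pure proportional update can even amplify the errors for a poorly damped servosystem.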
NASA Astrophysics Data System (ADS)
Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter
2018-03-01
An ensemble-based approach to specifying the observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in an Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has a higher RMSE than a run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but a smaller or comparable RMSE than a run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms to the total phytoplankton is reduced by a factor of two in comparison to the nonassimilative run.
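In outline, the analysis step then looks like this (matrix shapes only; the variable names are placeholders, with B standing for the EOF-subspace forecast error covariance):

```python
import numpy as np

def oi_update(xb, y, H, B, chl_ensemble):
    """Optimal Interpolation analysis with an observational error
    covariance R estimated from an ensemble of satellite Chl images.

    xb: background state (n,); y: observations (p,); H: obs operator (p, n);
    B: forecast error covariance (n, n); chl_ensemble: (N_members, p).
    """
    R = np.cov(chl_ensemble, rowvar=False)      # ensemble-derived obs errors
    innov = y - H @ xb                          # innovation vector
    S = H @ B @ H.T + R                         # innovation covariance
    return xb + B @ H.T @ np.linalg.solve(S, innov)   # analysis state
```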
Error of the slanted edge method for measuring the modulation transfer function of imaging systems.
Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu
2018-03-01
The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and the model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
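For orientation, the core of the slanted edge computation reduces to a few lines once an oversampled edge spread function (ESF) has been assembled (a condensed sketch; the windowing choice is illustrative):

```python
import numpy as np

def mtf_from_esf(esf):
    """Edge spread function -> line spread function -> MTF."""
    lsf = np.gradient(esf)                 # differentiate the ESF
    lsf = lsf * np.hanning(lsf.size)       # window to tame noise
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                    # normalize so MTF(0) = 1

# The error analysis above implies two practical checks: average many
# edge rows to beat random noise, and keep the edge angle away from the
# degenerate values so the projected samples stay uniformly distributed.
```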
Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements
NASA Technical Reports Server (NTRS)
Torres, O.; Bhartia, P. K.
1998-01-01
The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that uv-absorbing desert dust may introduce errors as large as 10% in ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1% for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that though the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect the studies of tropospheric ozone currently being carried out with the TOMS data.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert, Jr.
1999-01-01
In this study, we have developed time series of global temperature for 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 ± 0.06 K/decade during 1980-97.
Extreme wind-wave modeling and analysis in the south Atlantic ocean
NASA Astrophysics Data System (ADS)
Campos, R. M.; Alves, J. H. G. M.; Guedes Soares, C.; Guimaraes, L. G.; Parente, C. E.
2018-04-01
A set of wave hindcasts is constructed using two different types of wind calibration, followed by an additional test retuning the input source term Sin in the wave model. The goal is to improve the simulation of extreme wave events in the South Atlantic Ocean without compromising average conditions. Wind fields are based on the Climate Forecast System Reanalysis (CFSR/NCEP). The first wind calibration applies a simple linear regression model, with coefficients obtained from the comparison of CFSR against buoy data. The second is a method where deficiencies of the CFSR associated with severe sea state events are remedied, whereby "defective" winds are replaced with satellite data within cyclones. Six wind datasets were used to force WAVEWATCH III, and three additional tests with a modified Sin term lead to a total of nine wave hindcasts, which are evaluated against satellite and buoy data for ambient and extreme conditions. The target variable considered is the significant wave height (Hs). With increasing sea-state severity, the hindcast underestimation grows progressively and can be quantified as a function of percentiles. The wind calibration using a linear regression function shows results similar to the adjustment of the Sin term (an increase of the βmax parameter) in WAVEWATCH III - it effectively reduces the average bias of Hs but cannot avoid the increase of errors with percentiles. The use of blended scatterometer winds within cyclones reduces the growing wave hindcast errors mainly above the 93rd percentile and leads to a better representation of Hs at the peak of the storms. The combination of the linear regression calibration of non-cyclonic winds with scatterometer winds within the cyclones generated a wave hindcast with small errors from calm to extreme conditions. This approach reduced the percentage error of Hs from 14% to less than 8% for extreme waves, while also improving the RMSE.
Processing and error compensation of diffractive optical element
NASA Astrophysics Data System (ADS)
Zhang, Yunlong; Wang, Zhibin; Zhang, Feng; Qin, Hui; Li, Junqi; Mai, Yuying
2014-09-01
Diffractive optical elements (DOEs) show high diffraction efficiency and good dispersion performance, which makes optical systems lighter and more compact. In this paper, the design, processing, testing and compensation of DOEs are discussed, with emphasis on a compensation technology based on analyzing DOE measurement data from the Taylor Hobson PGI 1250. In this method, the relationship between the shadowing effect of the diamond tool and the processing accuracy is analyzed. Verification processing on the Taylor Hobson NANOFORM 250 lathe indicates that, after one pass of compensation processing, the PV reaches 0.539 micron, the surface roughness reaches 4 nm, the step position error is smaller than λ/10 and the step height error is less than 0.23 micron.
Fusion Protein Vaccines Targeting Two Tumor Antigens Generate Synergistic Anti-Tumor Effects
Cheng, Wen-Fang; Chang, Ming-Cheng; Sun, Wei-Zen; Jen, Yu-Wei; Liao, Chao-Wei; Chen, Yun-Yuan; Chen, Chi-An
2013-01-01
Introduction Human papillomavirus (HPV) has been consistently implicated in causing several kinds of malignancies, and two HPV oncogenes, E6 and E7, represent two potential target antigens for cancer vaccines. We developed two fusion protein vaccines, PE(ΔIII)/E6 and PE(ΔIII)/E7, targeting these two tumor antigens, to test whether a combination of two fusion proteins can generate more potent anti-tumor effects than a single fusion protein. Materials and Methods In vivo antitumor experiments, including preventive, therapeutic, and antibody depletion experiments, were performed. In vitro assays, including intracellular cytokine staining and ELISA for antibody responses, were also performed. Results PE(ΔIII)/E6+PE(ΔIII)/E7 generated stronger E6- and E7-specific immunity. Only 60% of the tumor protective effect was observed in the PE(ΔIII)/E6 group, compared to 100% in the PE(ΔIII)/E7 and PE(ΔIII)/E6+PE(ΔIII)/E7 groups. Mice vaccinated with the PE(ΔIII)/E6+PE(ΔIII)/E7 fusion proteins had a smaller subcutaneous tumor size than those vaccinated with the PE(ΔIII)/E6 or PE(ΔIII)/E7 fusion protein alone. Conclusion Fusion protein vaccines targeting both the E6 and E7 tumor antigens generated more potent immunotherapeutic effects than those targeting E6 or E7 alone. This novel strategy of targeting two tumor antigens together can promote the development of cancer vaccines and immunotherapy for HPV-related malignancies. PMID:24058440
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2013-01-01
Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) an adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
Visual error augmentation enhances learning in three dimensions.
Sharp, Ian; Huang, Felix; Patton, James
2011-09-02
Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal - rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects quickly returned to baseline within 6 trials.
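The visual manipulation itself can be stated compactly (a toy version; the gain value is illustrative, and the real task additionally applied the sensorimotor reversal):

```python
import numpy as np

def augmented_cursor(hand, start, goal, G=2.0):
    """Show a cursor whose perpendicular deviation from the straight
    start-goal path is exaggerated by gain G (G = 1 is veridical)."""
    d = (goal - start) / np.linalg.norm(goal - start)   # unit path direction
    along = start + np.dot(hand - start, d) * d         # projection onto path
    return along + G * (hand - along)                   # amplified deviation

hand = np.array([0.3, 0.1, 0.0])
print(augmented_cursor(hand, np.zeros(3), np.array([1.0, 0.0, 0.0])))
```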
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
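The lever-arm effect noted for rotational tracking errors is easy to quantify (illustrative numbers, not the study's measurements):

```python
import numpy as np

# A tracker angle error theta displaces a point at lever arm r by roughly
# r * theta, which adds to the translational target registration error.
theta = np.deg2rad(0.2)            # 0.2 degree rotational tracking error
for r_mm in (50, 150, 300):        # probe-to-target distances
    print(r_mm, "mm lever arm ->", round(r_mm * theta, 2), "mm displacement")
```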
Medicine and aviation: a review of the comparison.
Randell, R
2003-01-01
This paper aims to understand the nature of medical error in highly technological environments and argues that a comparison with aviation can blur its real understanding. It compares the notion of error in health care with that in aviation, based on the author's own ethnographic study in intensive care units and on findings from the research literature on errors in aviation. Failures in the use of medical technology are common. In attempts to understand the area of medical error, much attention has focused on how we can learn from aviation. This paper argues that such a comparison is not always useful, on the basis that (i) the type of work and technology is very different in the two domains; (ii) different issues are involved in training and procurement; and (iii) attitudes to error vary between the domains. Therefore, it is necessary to look closely at the subject of medical error and resolve those questions left unanswered by the lessons of aviation.
17 CFR 230.405 - Definitions of terms.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the estimated public offering price of the shares; or (3) In the case of an issuer whose public float... shares sold. (iii) Once an issuer fails to qualify for smaller reporting company status, it will remain... indebtedness, the number of shares if relating to shares, and the number of units if relating to any other kind...
Calibration of a stack of NaI scintillators at the Berkeley Bevalac
NASA Technical Reports Server (NTRS)
Schindler, S. M.; Buffington, A.; Lau, K.; Rasmussen, I. L.
1983-01-01
An analysis of the carbon and argon data reveals that essentially all of the charge-changing fragmentation reactions within the stack can be identified and removed by imposing the simple criteria relating the observed energy deposition profiles to the expected Bragg curve depositions. It is noted that these criteria are even capable of identifying approximately one-third of the expected neutron-stripping interactions, which in these cases have anomalous deposition profiles. The contribution of mass error from uncertainty in delta E has an upper limit of 0.25 percent for Mn; this produces an associated mass error for the experiment of about 0.14 amu. It is believed that this uncertainty will change little with changing gamma. Residual errors in the mapping produce even smaller mass errors for lighter isotopes, whereas photoelectron fluctuations and delta-ray effects are approximately the same independent of the charge and energy deposition.
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the many decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is attractive because it corrects errors through simple calculations for small t. However, it is problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by a proposed multi-mode hardware architecture covering the cases ν = 0-3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption lengthens the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (error correction code/circuit) in the memory system of a small-footprint SoC (system on chip).
Tropical forecasting - Predictability perspective
NASA Technical Reports Server (NTRS)
Shukla, J.
1989-01-01
Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and the associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: I) the collection of an environmental sample, II) component gas extraction from the sample through the application of gas separation chemistry, and III) the estimation of the radioactivity of the component gases.
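A simulation analogue of that three-component chain, with hypothetical per-stage uncertainties (the distributions and numbers below are placeholders, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
# I)   environmental sample collection: air volume with ~2% uncertainty
vol = rng.normal(10.0, 0.2, N)            # m^3 (hypothetical)
# II)  gas-separation chemistry: extraction efficiency
eff = rng.normal(0.85, 0.03, N)           # fraction (hypothetical)
# III) radioactivity estimation: Poisson counting statistics
counts = rng.poisson(1200, N)
conc = counts / (eff * vol)               # concentration, up to calibration
print(f"propagated relative error ~ {conc.std() / conc.mean():.3%}")
```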
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.
Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki
2014-11-01
Error propagation in Earth's atmospheric, oceanic, and land surface parameters of the satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating multiwavelength satellite data derived cloud masks using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data has been developed using sky index and bright index. Then, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data derived cloud masks by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagations caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. It shows that the influence of the error propagation by the MOD35 cloud mask on the MODIS derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy depends on the relationship among sample size, type I and II error probabilities, and the coefficients of...
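As a concrete illustration of that relationship, the standard two-sample normal approximation ties the per-group sample size to the type I and II error rates, the coefficient of variation (CV), and the relative difference to be detected; this is a generic power formula, not the authors' computation:

```python
from scipy.stats import norm

def n_per_group(cv, rel_diff, alpha=0.05, power=0.80):
    """n = 2 * (z_{1-alpha/2} + z_power)^2 * (CV / relative difference)^2"""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * cv / rel_diff) ** 2

print(round(n_per_group(0.30, 0.10)))   # ~141 plots/group for a 10% difference
print(round(n_per_group(0.30, 0.30)))   # ~16 plots/group for a 30% difference
```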
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application-specific integrated circuit) or an FPGA (field-programmable gate array). Specific error-handling and data-management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
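A software sketch of the voting-and-scrubbing idea (hypothetical and much simplified relative to the ASIC/FPGA protocols described):

```python
import zlib

def vote(a, b, c):
    """Bitwise majority across the three COTS copies (triplication)."""
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

def scrub(copies, crc_in_radhard_nvm):
    """Vote, check against the CRC held in rad-hard NVM, rewrite upsets."""
    voted = vote(*copies)
    if zlib.crc32(voted) == crc_in_radhard_nvm:
        for i in range(3):
            copies[i] = bytearray(voted)   # regenerate corrupted copies
        return voted
    return None                            # escalate: buffer in rad-hard cache

data = bytearray(b"telemetry block")
copies = [bytearray(data) for _ in range(3)]
crc = zlib.crc32(bytes(data))
copies[1][0] ^= 0x40                       # simulate a radiation-induced bit flip
assert scrub(copies, crc) == bytes(data)   # single-copy upset is corrected
```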
Checa, Purificación; Castellanos, M. C.; Abundis-Gutiérrez, Alicia; Rosario Rueda, M.
2014-01-01
Regulation of thoughts and behavior requires attention, particularly when there is conflict between alternative responses or when errors are to be prevented or corrected. Conflict monitoring and error processing are functions of the executive attention network, a neurocognitive system that greatly matures during childhood. In this study, we examined the development of brain mechanisms underlying conflict and error processing with event-related potentials (ERPs), and explored the relationship between brain function and individual differences in the ability to self-regulate behavior. Three groups of children aged 4–6, 7–9, and 10–13 years, and a group of adults performed a child-friendly version of the flanker task while ERPs were registered. Marked developmental changes were observed in both conflict processing and brain reactions to errors. After controlling by age, higher self-regulation skills are associated with smaller amplitude of the conflict effect but greater amplitude of the error-related negativity. Additionally, we found that electrophysiological measures of conflict and error monitoring predict individual differences in impulsivity and the capacity to delay gratification. These findings inform of brain mechanisms underlying the development of cognitive control and self-regulation. PMID:24795676
Amuse, M A; Kuchekar, S R; Mote, N A; Chavan, M B
1985-10-01
Tervalent gold was determined spectrophotometrically as its anionic 1:4 gold-thiol complex extracted into chloroform from aqueous acidic medium (1.5M sulphuric acid) in the presence of tri-iso-octylamine. The complex exhibits maximum absorption at 480 nm (molar absorptivity 4.60 x 10(3) l.mole(-1).cm(-1)) and Beer's law is obeyed in the concentration range 5-50 microg of gold(III) per ml. The relative standard deviation and relative error, calculated from ten determinations of solutions containing 15 microg of gold(III) per ml, were 1.0% and 0.8%, respectively. The method is simple, selective and reproducible. It permits separation of gold(III) from associated elements and its determination in synthetic mixtures.
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate, a total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies was 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). The needle susceptibility artifact was shifted 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Error due to needle deflection was the most significant cause, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
How much incisor decompensation is achieved prior to orthognathic surgery?
McNeil, Calum; McIntyre, Grant T; Laverick, Sean
2014-07-01
To quantify incisor decompensation in preparation for orthognathic surgery, pre-treatment and pre-surgery lateral cephalograms for 86 patients who had combined orthodontic and orthognathic treatment were digitised using OPAL 2.1 [http://www.opalimage.co.uk]. To assess intra-observer reproducibility, 25 images were re-digitised one month later. Random and systematic error were assessed using the Dahlberg formula and a two-sample t-test, respectively. Differences in the proportions of cases where the maxillary (110° ± 6°) or mandibular (90° ± 6°) incisors were fully decompensated were assessed using a Chi-square test (p<0.05). Mann-Whitney U tests were used to identify any differences in the amount of net decompensation for maxillary and mandibular incisors between the Class II and Class III groups (p<0.05). Random and systematic error were less than 0.5 degrees and p<0.05, respectively. A greater proportion of cases had decompensated mandibular incisors (80%) than maxillary incisors (62%), and this difference was statistically significant (p=0.029). The amount of maxillary incisor decompensation in the Class II and Class III groups did not differ statistically (p=0.45), whereas the mandibular incisors in the Class III group underwent statistically significantly greater decompensation (p=0.02). Mandibular incisors were decompensated in a greater proportion of cases than maxillary incisors in preparation for orthognathic surgery. There was no difference in the amount of maxillary incisor decompensation between Class II and Class III cases. There was greater net decompensation for mandibular incisors in Class III cases than in Class II cases. Key words: Decompensation, orthognathic, pre-surgical orthodontics, surgical-orthodontic.
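For reference, the Dahlberg statistic is sqrt(sum(d_i^2) / 2n) over the repeat digitisations; a small sketch with hypothetical angle pairs follows (a paired t-test is shown for the systematic component, one common choice):

```python
import numpy as np
from scipy import stats

def dahlberg(first, second):
    """Dahlberg random (method) error over repeat digitisations."""
    d = np.asarray(first) - np.asarray(second)
    return np.sqrt((d ** 2).sum() / (2 * d.size))

m1 = np.array([109.5, 92.1, 111.0, 88.7, 107.3])   # hypothetical angles (deg)
m2 = np.array([109.9, 91.6, 110.4, 89.2, 107.0])   # re-digitised a month later
print(f"random error = {dahlberg(m1, m2):.2f} deg")
print(stats.ttest_rel(m1, m2))                      # systematic error check
```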
Flight test evaluation of the E-systems Differential GPS category 3 automatic landing system
NASA Technical Reports Server (NTRS)
Kaufmann, David N.; Mcnally, B. David
1995-01-01
Test flights were conducted to evaluate the capability of Differential Global Positioning System (DGPS) to provide the accuracy and integrity required for International Civil Aviation Organization (ICAO) Category (CAT) III precision approach and landings. These test flights were part of a Federal Aviation Administration (FAA) program to evaluate the technical feasibility of using DGPS based technology for CAT III precision approach and landing applications. An IAI Westwind 1124 aircraft (N24RH) was equipped with DGPS receiving equipment and additional computing capability provided by E-Systems. The test flights were conducted at NASA Ames Research Center's Crows Landing Flight Facility, Crows Landing, California. The flight test evaluation was based on completing 100 approaches and landings. The navigation sensor error accuracy requirements were based on ICAO requirements for the Microwave Landing System (MLS). All of the approaches and landings were evaluated against ground truth reference data provided by a laser tracker. Analysis of these approaches and landings shows that the E-Systems DGPS system met the navigation sensor error requirements for a successful approach and landing 98 out of 100 approaches and landings, based on the requirements specified in the FAA CAT III Level 2 Flight Test Plan. In addition, the E-Systems DGPS system met the integrity requirements for a successful approach and landing or stationary trial for all 100 approaches and landings and all ten stationary trials, based on the requirements specified in the FAA CAT III Level 2 Flight Test Plan.
49 CFR 193.2509 - Emergency procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... plant; (ii) Potential hazards at the plant, including fires; (iii) Communication and emergency control... plant due to operating malfunctions, structural collapse, personnel error, forces of nature, and activities adjacent to the plant. (b) To adequately handle each type of emergency identified under paragraph...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2013 CFR
2013-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2014 CFR
2014-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2012 CFR
2012-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
Horowitz-Kraus, Tzipi
2016-05-01
The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values for nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors and 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
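The abstract does not state its margin formula; one widely used population recipe (van Herk) combines the per-axis systematic (Sigma) and random (sigma) SDs as 2.5*Sigma + 0.7*sigma. The SDs below are placeholders, not the study's values:

```python
import numpy as np

def ctv_to_ptv_margin(sigma_systematic, sigma_random):
    """Population margin recipe M = 2.5*Sigma + 0.7*sigma (van Herk)."""
    return 2.5 * np.asarray(sigma_systematic) + 0.7 * np.asarray(sigma_random)

# hypothetical per-axis SDs in mm: vertical, longitudinal, lateral
print(ctv_to_ptv_margin([0.9, 1.1, 1.0], [1.2, 1.5, 1.3]))
```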
Evaluation and error apportionment of an ensemble of ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition, and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
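The bias/variance/covariance split referred to is the standard exact decomposition MSE = (mean_m - mean_o)^2 + (sd_m - sd_o)^2 + 2*sd_m*sd_o*(1 - r); a generic sketch (not the AQMEII code):

```python
import numpy as np

def apportion_error(model, obs):
    """Exact MSE split: bias^2 + variance mismatch + covariance term."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    bias2 = (m.mean() - o.mean()) ** 2
    var   = (m.std() - o.std()) ** 2                 # population SDs (ddof=0)
    covar = 2 * m.std() * o.std() * (1 - np.corrcoef(m, o)[0, 1])
    assert np.isclose(bias2 + var + covar, np.mean((m - o) ** 2))
    return bias2, var, covar

rng = np.random.default_rng(3)
o = rng.normal(40, 10, 365)                          # e.g., daily ozone obs
m = 0.8 * o + 8 + rng.normal(0, 6, 365)              # a biased, noisy model
print(apportion_error(m, o))
```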
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalirai, Jason S.; Beaton, Rachael L.; Majewski, Steven R.
2010-03-10
We present new Keck/DEIMOS spectroscopic observations of hundreds of individual stars along the sightline to the first three of the Andromeda (M31) dwarf spheroidal (dSph) galaxies to be discovered, And I, II, and III, and combine them with recent spectroscopic studies by our team of three additional M31 dSphs, And VII, X, and XIV, as a part of the SPLASH Survey (Spectroscopic and Photometric Landscape of Andromeda's Stellar Halo). Member stars of each dSph are isolated from foreground Milky Way dwarf stars and M31 field contamination using a variety of photometric and spectroscopic diagnostics. Our final spectroscopic sample of member stars in each dSph, for which we measure accurate radial velocities with a median uncertainty (random plus systematic errors) of 4-5 km s^-1, includes 80 red giants in And I, 95 in And II, 43 in And III, 18 in And VII, 22 in And X, and 38 in And XIV. The sample of confirmed members in the six dSphs is used to derive each system's mean radial velocity, intrinsic central velocity dispersion, mean abundance, abundance spread, and dynamical mass. This combined data set presents us with a unique opportunity to perform the first systematic comparison of the global properties (e.g., metallicities, sizes, and dark matter masses) of one-third of Andromeda's total known dSph population with Milky Way counterparts of the same luminosity. Our overall comparisons indicate that the family of dSphs in these two hosts have both similarities and differences. For example, we find that the luminosity-metallicity relation is very similar between L ≈ 10^5 and 10^7 L_sun, suggesting that the chemical evolution histories of each group of dSphs are similar. The lowest luminosity M31 dSphs appear to deviate from the relation, possibly suggesting tidal stripping. Previous observations have noted that the sizes of M31's brightest dSphs are systematically larger than Milky Way satellites of similar luminosity. At lower luminosities between L = 10^4 and 10^6 L_sun, we find that the sizes of dSphs in the two hosts significantly overlap and that four of the faintest M31 dSphs are smaller than Milky Way counterparts. The first dynamical mass measurements of six M31 dSphs over a large range in luminosity indicate similar mass-to-light ratios compared to Milky Way dSphs among the brighter satellites, and smaller mass-to-light ratios among the fainter satellites. Combined with their similar or larger sizes at these luminosities, these results hint that the M31 dSphs are systematically less dense than Milky Way dSphs. The implications of these similarities and differences for general understanding of galaxy formation and evolution are summarized.
NASA Astrophysics Data System (ADS)
Nasyrov, R. K.; Poleshchuk, A. G.
2017-09-01
This paper describes the development and manufacture of diffraction corrector and imitator for the interferometric control of the surface shape of the 6-m main mirror of the Big Azimuthal Telescope of the Russian Academy of Sciences. The effect of errors in manufacture and adjustment on the quality of the measurement wavefront is studied. The corrector is controlled with the use of an off-axis diffraction imitator operating in a reflection mode. The measured error is smaller than 0.0138λ (RMS).
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Comparison of Asymmetric and Ice-cream Cone Models for Halo Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Na, H.; Moon, Y.
2011-12-01
Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms. To minimize the projection effect in coronagraph observations, several cone models have been suggested: an ice-cream cone model, an asymmetric cone model, etc. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle between the sky plane and the central axis of the cone. In this study, we compare these parameters obtained from the different models using 48 well-observed HCMEs from 2001 to 2002. We also obtain the root mean square error (RMS error) between measured projection speeds and calculated projection speeds for both cone models. As a result, we find that the radial speeds obtained from the models are well correlated with each other (R = 0.86), and the correlation coefficient of angular width is 0.6. The correlation coefficient of the angle between the sky plane and the central axis of the cone is 0.31, which is much smaller than expected. The reason may be that the source locations of the asymmetric cone model are distributed near the center, while those of the ice-cream cone model are located over a wide range. The average RMS error of the asymmetric cone model (85.6 km/s) is slightly smaller than that of the ice-cream cone model (87.8 km/s).
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and accuracy was assessed with the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.
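A minimal sketch of the model comparison, using scikit-learn stand-ins for the paper's artificial neural network and linear model (the data here are synthetic placeholders, not the swimmers' kinematics):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 5))                       # stand-in kinematic/kinetic inputs
y = 1.6 + 0.1 * X[:, 0] - 0.05 * X[:, 1] ** 2 \
    + rng.normal(0, 0.02, 80)                      # stand-in 5 m start time (s)

Xtr, Xte, ytr, yte = X[:60], X[60:], y[:60], y[60:]
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)):
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(type(model).__name__, mean_absolute_percentage_error(yte, pred))
```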
On the relationship between aerosol content and errors in telephotometer experiments.
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.
1971-01-01
This paper presents an invariant imbedding theory of multiple scattering phenomena contributing to errors in telephotometer experiments. The theory indicates that there is a simple relationship between the magnitudes of the errors introduced by successive orders of scattering and it is shown that for all optical thicknesses each order can be represented by a coefficient which depends on the field of view of the telescope and the properties of the scattering medium. The verification of the theory and the derivation of the coefficients have been accomplished by a Monte Carlo program. Both monodisperse and polydisperse systems of Mie scatterers have been treated. The results demonstrate that for a given optical thickness the coefficients increase strongly with the mean particle size particularly for the smaller fields of view.
III, ERes, and Ares--A Reserves Comparison
ERIC Educational Resources Information Center
Power, June L.
2011-01-01
Founded in 1887 as the Croatan Normal School, the University of North Carolina at Pembroke (UNCP) is a smaller branch of the UNC System (with a full-time enrollment of about 6,500), yet with awarded diversity and a historical dedication to individualized service. Its dedication to service drives its approach to course reserves, which is to…
Tile Patterns with LOGO--Part III: Tile Patterns from Mult Tiles Using Logo.
ERIC Educational Resources Information Center
Clason, Robert G.
1991-01-01
A mult tile is a set of polygons each of which can be dissected into smaller polygons similar to the original set of polygons. Using a recursive LOGO method that requires solutions to various geometry and trigonometry problems, dissections of mult tiles are carried out repeatedly to produce tile patterns. (MDH)
Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ming; Cygler,
The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors, and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, over the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts a reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
NASA Technical Reports Server (NTRS)
Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.
2015-01-01
A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals that is assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique in the NASA HRP in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification, and that it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III). Two problems occur when focusing on performance errors: 1) the errors are infrequent and, therefore, difficult to observe and record; and 2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures. Astronauts are notably adept high performers. Most failures are recorded only when multiple small errors occur and humans are unable to recognize and correct or compensate for them in time to prevent a failure (Dismukes, Berman, & Loukopoulos, 2007) (Category III). More commonly, observers record variability in levels of performance. Some teams commit no observable errors but fail to achieve performance objectives or perform only adequately, while other teams commit some errors but perform spectacularly. Successful performance, therefore, cannot be viewed as simply the absence of errors or the avoidance of failure (Johnson Space Center (JSC) Joint Leadership Team, 2008). While failure is commonly attributed to making a major error, focusing solely on the elimination of errors does not significantly reduce the risk of failure. Failure may also occur when performance is simply insufficient or an effort is incapable of adjusting sufficiently to a contextual change (e.g., changing levels of autonomy).
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
Geometric shape factors were computed and applied to simulated satellite irradiance measurements to estimate Earth-emitted flux densities on global and zonal scales and for areas smaller than the detector field of view (FOV). Wide field of view flat-plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent, and global estimates were in error by less than 0.2 percent. Estimates with unrestricted field of view (UFOV) detectors were about the same for the Lambertian and limb-darkening radiation models. The opposite was found for restricted field of view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better estimators of flux density from a circle centered in the FOV with an area significantly smaller than that of the total FOV.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator, and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for a much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in the prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate the longest delay with the least gain distortion of the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner, so that the predictor can accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best-performing reference model was selected for implementation in the final tests. Analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for longer time delays. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
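As a toy illustration of transport-delay compensation, a first-order extrapolator (far simpler than the McFarland or Sobiski/Cardullo predictors) predicts the signal one delay ahead:

```python
import numpy as np

def predict_ahead(x, dt, delay):
    """First-order extrapolation x(t + d) ~ x(t) + d * x'(t): a toy stand-in
    for the lead/lag, McFarland, and state-space compensators discussed."""
    return x + delay * np.gradient(x, dt)

dt, delay = 0.01, 0.10                     # a 100 ms transport delay
t = np.arange(0.0, 5.0, dt)
x = np.sin(2 * np.pi * 0.5 * t)            # surrogate operator input
xp = predict_ahead(x, dt, delay)
lag = int(delay / dt)
print(np.abs(xp[:-lag] - x[lag:]).max())   # residual vs. the true future signal
```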
Kazakov, Igor V; Bodensteiner, Michael; Timoshkin, Alexey Y
2014-03-01
The molecular structures of trichlorido(2,2':6',2''-terpyridine-κ(3)N,N',N'')gallium(III), [GaCl3(C15H11N3)], and tribromido(2,2':6',2''-terpyridine-κ(3)N,N',N'')gallium(III), [GaBr3(C15H11N3)], are isostructural, with the Ga(III) atom displaying an octahedral geometry. It is shown that the Ga-N distances in the two complexes are the same within experimental error, in contrast to expected bond lengthening in the bromide complex due to the lower Lewis acidity of GaBr3. Thus, masking of the Lewis acidity trends in the solid state is observed not only for complexes of group 13 metal halides with monodentate ligands but for complexes with the polydentate 2,2':6',2''-terpyridine donor as well.
Expression-invariant representations of faces.
Bronstein, Alexander M; Bronstein, Michael M; Kimmel, Ron
2007-01-01
Addressed here is the problem of constructing and analyzing expression-invariant representations of human faces. We demonstrate and justify experimentally a simple geometric model that allows facial expressions to be described as isometric deformations of the facial surface. The main step in the construction of an expression-invariant representation of a face involves embedding the facial intrinsic geometric structure into some low-dimensional space. We study the influence of the embedding space's geometry and dimensionality on the representation accuracy and argue that, compared to its Euclidean counterpart, spherical embedding leads to notably smaller metric distortions. We support our claim experimentally, showing that a smaller embedding error leads to better recognition.
Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari
2013-10-01
Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH(3). Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7∕8 for (14)NH(3) is only 0.015 cm(-1) and 0.020∕0.023 cm(-1), respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1∕3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm(-1). Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of (15)NH(3) and the highly accurate Cologne database (CDMS): 0.00034 cm(-1) (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the (15)NH(3) energy levels determined from a model of the experimental data.
Proton irradiation effects on advanced digital and microwave III-V components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hash, G.L.; Schwank, J.R.; Shaneyfelt, M.R.
1994-09-01
A wide range of advanced III-V components suitable for use in high-speed satellite communication systems were evaluated for displacement damage and single-event effects in high-energy, high-fluence proton environments. Transistors and integrated circuits (both digital and MMIC) were irradiated with protons at energies from 41 to 197 MeV and at fluences from 10^10 to 2 × 10^14 protons/cm^2. Large soft-error rates were measured for digital GaAs MESFET (3 × 10^-5 errors/bit-day) and heterojunction bipolar circuits (10^-5 errors/bit-day). No transient signals were detected from MMIC circuits. The largest degradation in transistor response caused by displacement damage was observed for 1.0-μm depletion- and enhancement-mode MESFET transistors. Shorter gate length MESFET transistors and HEMT transistors exhibited less displacement-induced damage. These results show that memory-intensive GaAs digital circuits may result in significant system degradation due to single-event upset in natural and man-made space environments. However, displacement damage effects should not be a limiting factor for fluence levels up to 10^14 protons/cm^2 [equivalent to total doses in excess of 10 Mrad(GaAs)].
Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution.
1980-05-27
Report by Gary M. Hieftje and Gilbert R. Haugen, prepared for publication in Analytical and Clinical Chemistry, vol. 3, D. M. Hercules, G. M. Hieftje, L. R. Snyder, and M. A. Evenson, eds., Plenum Press, N.Y., 1978, ch. 5.
Efficient Z gates for quantum computing
NASA Astrophysics Data System (ADS)
McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.
2017-08-01
For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3) × 10^-4] and low leakage [3.1(6) × 10^-6]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
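The "two Xπ/2 pulses plus virtual Zs" claim can be checked numerically. The sketch below verifies one standard form of the identity, U(θ, φ, λ) ∝ Z_φ · X_{π/2} · Z_{π−θ} · X_{π/2} · Z_{λ+π}, up to a global phase (sign conventions vary between papers):

```python
import numpy as np

def rz(a):
    """Virtual Z: a frame change, implemented in software at zero duration."""
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

X90 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)   # physical X_{pi/2} pulse

def u3(theta, phi, lam):
    """Generic single-qubit gate in the usual U(theta, phi, lambda) form."""
    return np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2),
                      np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

theta, phi, lam = 0.7, 1.9, -0.4
U = rz(phi) @ X90 @ rz(np.pi - theta) @ X90 @ rz(lam + np.pi)
V = u3(theta, phi, lam)
phase = U[0, 0] / V[0, 0]                 # strip the global phase
print(np.allclose(U, phase * V))          # True: two X90s + virtual Zs suffice
```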
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
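The reported underestimation is classical measurement-error attenuation; a minimal simulation (illustrative only, not the authors' Bayesian multiscale model) shows a naive coarse-level fit shrinking toward zero once the aggregated predictor carries extra noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n_fine, k = 2000, 10                       # fine units aggregated k-to-1
x = rng.normal(size=n_fine)
y = 1.0 + 0.8 * x + rng.normal(0, 0.5, n_fine)

xc = x.reshape(-1, k).mean(axis=1)         # coarse-level predictor
yc = y.reshape(-1, k).mean(axis=1)         # coarse-level outcome
x_err = xc + rng.normal(0, 0.5, len(xc))   # aggregation-induced predictor error

beta_true = np.polyfit(xc, yc, 1)[0]       # ~0.8
beta_naive = np.polyfit(x_err, yc, 1)[0]   # attenuated toward zero
print(beta_true, beta_naive)
```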
Performance of Bootstrap MCEWMA: Study case of Sukuk Musyarakah data
NASA Astrophysics Data System (ADS)
Safiih, L. Muhamad; Hila, Z. Nurul
2014-07-01
Sukuk Musyarakah is one of several instruments of Islamic bond investment in Malaysia; it restructures the conventional bond into a Syariah-compliant form, based on the prohibition of usury and of any fixed, guaranteed return. Daily sukuk returns are therefore not fixed, and statistically they form a time series that is dependent and autocorrelated. Such data pose a serious problem in both statistics and finance: the volatility of the returns indicates whether prices change dramatically, marking the bond as risky or not. This problem has received far less attention for sukuk than for conventional bonds. The MCEWMA chart in statistical process control (SPC) is widely used to monitor autocorrelated data, and its application to daily returns of securities investments has gained attention among statisticians. However, the chart suffers from inaccurate estimation, whether of its base model or of its limits, producing large errors and a high probability of falsely signalling an out-of-control process. To overcome this problem, a bootstrap approach is used in this study: hybridising the bootstrap with the MCEWMA base model yields a new chart, the Bootstrap MCEWMA (BMCEWMA) chart. The hybrid chart is applied to the daily returns of the sukuk Musyarakah of Rantau Abang Capital Bhd. The BMCEWMA base model proves more effective than the original MCEWMA, with smaller estimation error, a shorter confidence interval, and fewer false alarms. In other words, the hybrid chart reduces variability, and we conclude that the application of BMCEWMA is better than that of MCEWMA.
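In outline, the hybrid computes an EWMA recursion on the returns and sets its control limits from a bootstrap that respects the autocorrelation. A generic moving-block sketch under that reading (not the authors' exact estimator):

```python
import numpy as np

def ewma(x, lam=0.2):
    """EWMA recursion z_i = lam * x_i + (1 - lam) * z_{i-1}."""
    z = np.empty(len(x))
    z[0] = x[0]
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    return z

def block_bootstrap_limits(returns, lam=0.2, B=2000, block=10,
                           alpha=0.0027, seed=0):
    """Moving-block bootstrap of the EWMA statistic, preserving autocorrelation."""
    r, rng = np.asarray(returns), np.random.default_rng(seed)
    n, stats = len(returns), []
    for _ in range(B):
        starts = rng.integers(0, n - block + 1, size=n // block + 1)
        boot = np.concatenate([r[s:s + block] for s in starts])[:n]
        stats.append(ewma(boot, lam)[-1])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# simulated autocorrelated daily returns standing in for the sukuk series
rng = np.random.default_rng(1)
r = np.zeros(500)
for i in range(1, 500):
    r[i] = 0.4 * r[i - 1] + rng.normal(0, 0.01)
print(block_bootstrap_limits(r))           # lower/upper control limits
```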
Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant
2015-01-01
Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
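Extrapolating error rates from small training sets is commonly done by fitting an inverse power-law learning curve to the subsampled estimates; a sketch under that assumption (the paper's exact estimator may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, b):
    """Learning-curve model: error(n) = a * n^(-alpha) + b (b = error floor)."""
    return a * n ** (-alpha) + b

# hypothetical mean error rates from repeated random sampling at small sizes
n_train = np.array([25, 50, 75, 100, 150, 200], dtype=float)
err     = np.array([0.31, 0.26, 0.24, 0.22, 0.21, 0.20])

params, _ = curve_fit(power_law, n_train, err,
                      p0=(1.0, 0.5, 0.1), bounds=(0, np.inf))
print(power_law(1000.0, *params))   # extrapolated error rate for a large trial
```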
Brain Potentials Measured During a Go/NoGo Task Predict Completion of Substance Abuse Treatment
Steele, Vaughn R.; Fink, Brandi C.; Maurer, J. Michael; Arbabshirani, Mohammad R.; Wilber, Charles H.; Jaffe, Adam J.; Sidz, Anna; Pearlson, Godfrey D.; Calhoun, Vince D.; Clark, Vincent P.; Kiehl, Kent A.
2014-01-01
Background US nationwide estimates indicate 50–80% of prisoners have a history of substance abuse or dependence. Tailoring substance abuse treatment to specific needs of incarcerated individuals could improve effectiveness of treating substance dependence and preventing drug abuse relapse. The purpose of the present study was to test the hypothesis that pre-treatment neural measures of a Go/NoGo task would predict which individuals would or would not complete a 12-week cognitive behavioral substance abuse treatment program. Methods Adult incarcerated participants (N=89; Females=55) who volunteered for substance abuse treatment performed a response inhibition (Go/NoGo) task while event-related potentials (ERP) were recorded. Stimulus- and response-locked ERPs were compared between individuals who completed (N=68; Females=45) and discontinued (N=21; Females=10) treatment. Results As predicted, stimulus-locked P2, response-locked error-related negativity (ERN/Ne), and response-locked error positivity (Pe), measured with windowed time-domain and principal component analysis, differed between groups. Using logistic regression and support-vector machine (i.e., pattern classifiers) models, P2 and Pe predicted treatment completion above and beyond other measures (i.e., N2, P300, ERN/Ne, age, sex, IQ, impulsivity, and self-reported depression, anxiety, motivation for change, and years of drug abuse). Conclusions We conclude individuals who discontinue treatment exhibited deficiencies in sensory gating, as indexed by smaller P2, error-monitoring, as indexed by smaller ERN/Ne, and adjusting response strategy post-error, as indexed by larger Pe. However, the combination of P2 and Pe reliably predicted 83.33% of individuals who discontinued treatment. These results may help in the development of individualized therapies, which could lead to more favorable, long-term outcomes. PMID:24238783
USDA-ARS's Scientific Manuscript database
Elementary chemistry errors are contained in: Adsorption sequence of toxic inorganic anions on a soil by S. Saeki published in the Bulletin of Environmental Contamination and Toxicology (2008) 81:508-512. In the article, the author asserts emphatically that he is studying As(V) not As(III) adsorpt...
Biomarker for Glycogen Storage Diseases
2017-07-03
Fructose Metabolism, Inborn Errors; Glycogen Storage Disease; Glycogen Storage Disease Type I; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Glycogen Storage Disease Type VI; Glycogen Storage Disease Type VII; Glycogen Storage Disease Type VIII
Chaos based encryption system for encrypting electroencephalogram signals.
Lin, Chin-Feng; Shih, Shun-Han; Zhu, Jin-De
2014-05-01
In the paper, we use the Microsoft Visual Studio Development Kit and C# programming language to implement a chaos-based electroencephalogram (EEG) encryption system involving three encryption levels. A chaos logic map, initial value, and bifurcation parameter for the map were used to generate Level I chaos-based EEG encryption bit streams. Two encryption-level parameters were added to these elements to generate Level II chaos-based EEG encryption bit streams. An additional chaotic map and chaotic address index assignment process was used to implement the Level III chaos-based EEG encryption system. Eight 16-channel EEG Vue signals were tested using the encryption system. The encryption was the most rapid and robust in the Level III system. The test yielded superior encryption results, and when the correct deciphering parameter was applied, the EEG signals were completely recovered. However, an input parameter error (e.g., a 0.00001 % initial point error) causes chaotic encryption bit streams, preventing the recovery of 16-channel EEG Vue signals.
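The Level I elements named in the abstract (a chaos logic map, an initial value, and a bifurcation parameter) are enough to sketch the core mechanism: a logistic-map keystream whose extreme sensitivity to the initial value explains why a tiny key error prevents recovery. The byte quantization and XOR combination below are illustrative assumptions; the paper's Level II/III constructions (extra parameters, a second map, chaotic address indexing) are not reproduced.

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.3141, r=3.9999, burn=1000):
    """Chaotic keystream from the logistic map x <- r*x*(1-x).

    x0 (initial value) and r (bifurcation parameter) act as the secret key;
    the specific values here are illustrative, not those of the cited system.
    """
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF      # quantize the state to one byte
    return out

def chaos_xor(signal_bytes, x0=0.3141, r=3.9999):
    ks = logistic_keystream(len(signal_bytes), x0, r)
    return np.bitwise_xor(signal_bytes, ks)   # same call encrypts and decrypts

eeg = np.random.default_rng(1).integers(0, 256, 1024, dtype=np.uint8)
enc = chaos_xor(eeg)
dec_ok = chaos_xor(enc)                       # correct key recovers the signal
dec_bad = chaos_xor(enc, x0=0.3141 + 1e-7)    # tiny initial-value error fails
print(np.array_equal(dec_ok, eeg), np.mean(dec_bad != eeg))
```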
MarsSedEx III: linking Computational Fluid Dynamics (CFD) and reduced gravity experiments
NASA Astrophysics Data System (ADS)
Kuhn, N. J.; Kuhn, B.; Gartmann, A.
2015-12-01
Nikolaus J. Kuhn (1), Brigitte Kuhn (1), and Andres Gartmann (2) (1) University of Basel, Physical Geography, Environmental Sciences, Basel, Switzerland (nikolaus.kuhn@unibas.ch), (2) Meteorology, Climatology, Remote Sensing, Environmental Sciences, University of Basel, Switzerland Experiments conducted during the MarsSedEx I and II reduced gravity experiments showed that using empirical models for sediment transport on Mars developed for Earth violates fluid dynamics. The error is caused by the interaction between running water and sediment particles, which affect each other in a positive feedback loop. As a consequence, the actual flow conditions around a particle cannot be represented by drag coefficients derived on Earth. This study examines the implications of such gravity effects on sediment movement on Mars, with special emphasis on the limits of sandstones and conglomerates formed on Earth as analogues for sedimentation on Mars. Furthermore, options for correcting the errors using a combination of CFD and recent experiments conducted during the MarsSedEx III campaign are presented.
Fu, Haijin; Wang, Yue; Tan, Jiubin; Fan, Zhigang
2018-01-01
Even after the Heydemann correction, residual nonlinear errors, ranging from hundreds of picometers to several nanometers, are still found in heterodyne laser interferometers. This is a crucial factor impeding the realization of picometer level metrology, but its source and mechanism have barely been investigated. To study this problem, a novel nonlinear model based on optical mixing and coupling with ghost reflection is proposed and then verified by experiments. After intense investigation of this new model’s influence, results indicate that new additional high-order and negative-order nonlinear harmonics, arising from ghost reflection and its coupling with optical mixing, have only a negligible contribution to the overall nonlinear error. In real applications, any effect on the Lissajous trajectory might be invisible due to the small ghost reflectance. However, even a tiny ghost reflection can significantly worsen the effectiveness of the Heydemann correction, or even make this correction completely ineffective, i.e., compensation makes the error larger rather than smaller. Moreover, the residual nonlinear error after correction is dominated only by ghost reflectance. PMID:29498685
NASA Technical Reports Server (NTRS)
Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.
1993-01-01
The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of Special Sensor Microwave/Imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.
C III] Emission in Star-forming Galaxies at z ∼ 1
NASA Astrophysics Data System (ADS)
Du, Xinnan; Shapley, Alice E.; Martin, Crystal L.; Coil, Alison L.
2017-03-01
The C III]λλ1907, 1909 rest-frame UV emission doublet has recently been detected in galaxies during the epoch of reionization (z > 6), with a high equivalent width (EW; 10 Å, rest frame). Currently, it is possible to obtain much more detailed information for star-forming galaxies at significantly lower redshift. Accordingly, studies of their far-UV spectra are useful for understanding the factors modulating the strength of C III] emission. We present the first statistical sample of C III] emission measurements in star-forming galaxies at z ∼ 1. Our sample is drawn from the DEEP2 survey and spans the redshifts 0.64 ≤ z ≤ 1.35 (⟨z⟩ = 1.08). We find that the median EW of individual C III] detections in our sample (1.30 Å) is much smaller than the typical value observed thus far at z > 6. Furthermore, out of 184 galaxies with coverage of C III], only 40 have significant detections. Galaxies with individual C III] detections have bluer colors and lower luminosities on average than those without, implying that strong C III] emitters are in general young and low-mass galaxies without significant dust extinction. Using stacked spectra, we further investigate how C III] strength correlates with multiple galaxy properties (M_B, U − B, M_*, star formation rate, specific star formation rate) and rest-frame near-UV (Fe II* and Mg II) and optical ([O III] and Hβ) emission line strengths. These results provide a detailed picture of the physical environment in star-forming galaxies at z ∼ 1, and motivate future observations of strong C III] emitters at similar redshifts.
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
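A minimal simulation in the spirit of the study: a null SNP whose allele frequency differs between two ancestral populations, with a phenotype mean shift by ancestry, tested with and without an ancestry covariate (a crude stand-in for the genomic MDS components used in the paper). All parameter values are illustrative.

```python
import numpy as np

def type1_rate(n=2000, p=(0.3, 0.7), shift=0.5, n_sim=500, adjust=False, seed=0):
    """Empirical type-I error at alpha = 0.05 for a null SNP in a mixed sample.

    Half the sample comes from each ancestral population; the phenotype mean
    differs by `shift` between ancestries but the SNP has no causal effect,
    so excess rejections reflect stratification alone.
    """
    rng = np.random.default_rng(seed)
    anc = np.repeat([0, 1], n // 2)                 # ancestry indicator
    hits = 0
    for _ in range(n_sim):
        freq = np.where(anc == 0, p[0], p[1])       # divergent allele freqs
        g = rng.binomial(2, freq).astype(float)     # genotype under the null
        y = shift * anc + rng.normal(size=n)        # stratified phenotype
        cols = [np.ones(n), g] + ([anc.astype(float)] if adjust else [])
        X = np.column_stack(cols)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sigma2 = np.sum((y - X @ beta) ** 2) / (n - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += abs(beta[1] / se) > 1.96
    return hits / n_sim

print(type1_rate(adjust=False), type1_rate(adjust=True))  # inflated vs ~0.05
```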
Joo, Yeon Kyoung; Lee-Won, Roselyn J
2016-10-01
For members of a group negatively stereotyped in a domain, making mistakes can aggravate the influence of stereotype threat because negative stereotypes often blame target individuals and attribute the outcome to their lack of ability. Virtual agents offering real-time error feedback may influence performance under stereotype threat by shaping the performers' attributional perception of errors they commit. We explored this possibility with female drivers, considering the prevalence of the "women-are-bad-drivers" stereotype. Specifically, we investigated how in-vehicle voice agents offering error feedback based on responsibility attribution (internal vs. external) and outcome attribution (ability vs. effort) influence female drivers' performance under stereotype threat. In addressing this question, we conducted an experiment in a virtual driving simulation environment that provided moment-to-moment error feedback messages. Participants performed a challenging driving task and made mistakes preprogrammed to occur. Results showed that the agent's error feedback with outcome attribution moderated the stereotype threat effect on driving performance. Participants under stereotype threat had a smaller number of collisions when the errors were attributed to effort than to ability. In addition, outcome attribution feedback moderated the effect of responsibility attribution on driving performance. Implications of these findings are discussed.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
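The two discretization methods compared in the study can be sketched in one dimension: the cell-center method samples the Gaussian density at cell midpoints, while cell integration assigns each cell its exact probability mass via the error function. The 1-D reduction and the grid extent are simplifications; the study's kernels are 2-D circular Gaussian and Laplacian.

```python
import numpy as np
from scipy.special import erf

def gaussian_kernel_1d(sigma, half_width, method="integrate"):
    """Discretize a 1-D Gaussian dispersal kernel on unit grid cells."""
    cells = np.arange(-half_width, half_width + 1)
    if method == "center":
        k = np.exp(-cells**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    else:
        hi = (cells + 0.5) / (np.sqrt(2) * sigma)
        lo = (cells - 0.5) / (np.sqrt(2) * sigma)
        k = 0.5 * (erf(hi) - erf(lo))      # exact mass in [c-0.5, c+0.5]
    return k / k.sum()                     # normalize to a proper kernel

for sigma in (0.12, 1.0, 10.0):
    kc = gaussian_kernel_1d(sigma, 25, "center")
    ki = gaussian_kernel_1d(sigma, 25, "integrate")
    print(sigma, np.abs(kc - ki).max())    # discrepancy grows as sigma shrinks
```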
Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook
2017-06-20
The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. A large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing. Smaller-scale surface errors are controlled by polishing process parameters. Surface errors of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, the microsurface roughness is often given by the root mean square at a high spatial frequency range, with errors within a 0.5×0.5 mm local surface map with 500×500 pixels. This surface specification is not adequate to fully describe the characteristics for advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. Then, the surface PSD was systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
NASA Technical Reports Server (NTRS)
Danilin, M. Y.; Ko, Malcolm K. W.; Bevilacqua, R. M.; Lyjak, L. V.; Froidevaux, L.; Santee, M. L.; Zawodny, J. M.; Hoppel, K. W.; Richard, E. C.; Spackman, J. R.;
2001-01-01
We compared the version 5 Microwave Limb Sounder (MLS) aboard the Upper Atmosphere Research Satellite (UARS), version 3 Polar Ozone and Aerosol Measurement-III (POAM-III) aboard the French satellite SPOT-IV, version 6.0 Stratospheric Aerosol and Gas Experiment II (SAGE II) aboard the Earth Radiation Budget Satellite, and NASA ER-2 aircraft measurements made in the northern hemisphere in January-February 2000 during the SAGE III Ozone Loss and Validation Experiment (SOLVE). This study addresses one of the key scientific objectives of the SOLVE campaign, namely, to validate multi-platform satellite measurements made in the polar stratosphere during winter. This intercomparison was performed using a traditional correlative analysis (TCA) and a trajectory hunting technique (THT). Launching backward and forward trajectories from the points of measurement, the THT identifies air parcels sampled at least twice within a prescribed match criterion during the course of 5 days. We found that the ozone measurements made by these four instruments agree most of the time within ±10% in the stratosphere up to 1400 K (approximately 35 km). The water vapor measurements from POAM-III and the ER-2 Harvard Lyman-alpha hygrometer and JPL laser hygrometer agree to within ±0.5 ppmv (or about ±10%) in the lower stratosphere above 380 K. The MLS and ER-2 ClO measurements agree within their error bars for the TCA. The MLS and ER-2 nitric acid measurements near 17-20 km altitude agree within their uncertainties most of the time with a hint of a positive offset by MLS according to the TCA. We also applied the AER box model constrained by the ER-2 measurements for analysis of the ClO and HNO3 measurements using the THT. We found that: (1) the model values of ClO are smaller by about 0.3-0.4 (0.2) ppbv below (above) 400 K than those by MLS and (2) the HNO3 comparison shows a positive offset of MLS values by approximately 1 and 1-2 ppbv below 400 K and near 450 K, respectively. It is hard to quantify the HNO3 offset in the 400-440 K range because of the high sensitivity of nitric acid to the PSC schemes. Our study shows that, with some limitations (like the HNO3 comparison under PSC conditions), the THT is a more powerful tool for validation studies than the TCA, making the conclusions of the comparison statistically more robust.
Propolis Modifies Collagen Types I and III Accumulation in the Matrix of Burnt Tissue.
Olczyk, Pawel; Wisowski, Grzegorz; Komosinska-Vassev, Katarzyna; Stojko, Jerzy; Klimek, Katarzyna; Olczyk, Monika; Kozma, Ewa M
2013-01-01
Wound healing represents an interactive process which requires highly organized activity of various cells, synthesizing cytokines, growth factors, and collagen. Collagen types I and III, serving as structural and regulatory molecules, play pivotal roles during wound healing. The aim of this study was to compare the propolis and silver sulfadiazine therapeutic efficacy through the quantitative and qualitative assessment of collagen types I and III accumulation in the matrix of burnt tissues. Burn wounds were inflicted on pigs, chosen for the evaluation of wound repair because of many similarities between pig and human skin. Isolated collagen types I and III were estimated by the surface plasmon resonance method with a subsequent collagenous quantification using electrophoretic and densitometric analyses. Propolis burn treatment led to enhanced expression of the collagens and their components, especially during the initial stage of the study. Less pronounced changes were observed after silver sulfadiazine (AgSD) application. AgSD and, with a smaller intensity, propolis stimulated accumulation of collagenous degradation products. The assessed propolis therapeutic efficacy, based on quantitative and qualitative analyses of collagen types I and III expression and degradation in the wound matrix, may indicate that this apitherapeutic agent can generate a favorable biochemical environment supporting reepithelialization.
Proxy-equation paradigm: A strategy for massively parallel asynchronous computations
NASA Astrophysics Data System (ADS)
Mittal, Ankita; Girimaji, Sharath
2017-09-01
Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order and thus, (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
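A toy demonstration of the asynchrony error the proxy-equation approach is designed to offset: one explicit advection-diffusion update computed with current versus stale (previous-step) neighbor values at a processing-element boundary. This sketch shows only the error source; the proxy-equation modification itself is not reproduced here.

```python
import numpy as np

# One explicit update of u_t + c*u_x = nu*u_xx on a periodic 1-D grid,
# comparing current ("synchronized") halo values against stale values from
# the previous step ("asynchronous").
nx = 64
dx = 1.0 / nx
c, nu = 1.0, 0.05
dt = 0.2 * dx**2 / nu                       # stable explicit step

x = np.arange(nx) * dx
u_old = np.sin(2 * np.pi * x)               # state at step n-1
u = np.sin(2 * np.pi * (x - c * dt))        # state at step n

def step(u, left, right):
    """Upwind advection + central diffusion, given neighbor halo values."""
    um = np.concatenate(([left], u[:-1]))
    up = np.concatenate((u[1:], [right]))
    return u - c * dt / dx * (u - um) + nu * dt / dx**2 * (up - 2 * u + um)

sync = step(u, u[-1], u[0])                 # halos from the current step
stale = step(u, u_old[-1], u_old[0])        # halos lagged by one step
print("max asynchrony error:", np.abs(sync - stale).max())
```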
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
NASA Astrophysics Data System (ADS)
Kim, Isaac H.
2011-05-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting code consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interaction of overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits are equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survive at finite temperature; and (iv) behave as classical memory at finite temperature.
Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.
2008-01-01
Multichannel Analysis of Surface Waves utilizes a multichannel recording system to estimate near-surface shear (S)-wave velocities from high-frequency Rayleigh waves. A pseudo-2D S-wave velocity (v_S) section is constructed by aligning 1D models at the midpoint of each receiver spread and using a spatial interpolation scheme. The horizontal resolution of the section is therefore most influenced by the receiver spread length and the source interval. The receiver spread length sets the theoretical lower limit and any v_S structure with its lateral dimension smaller than this length will not be properly resolved in the final v_S section. A source interval smaller than the spread length will not improve the horizontal resolution because spatial smearing has already been introduced by the receiver spread. In this paper, we first analyze the horizontal resolution of a pair of synthetic traces. Resolution analysis shows that (1) a pair of traces with a smaller receiver spacing achieves higher horizontal resolution of inverted S-wave velocities but results in a larger relative error; (2) the relative error of the phase velocity at a high frequency is smaller than at a low frequency; and (3) the relative error of the inverted S-wave velocity is affected by the signal-to-noise ratio of the data. These results provide us with a guideline to balance the trade-off between receiver spacing (horizontal resolution) and accuracy of the inverted S-wave velocity. We then present a scheme to generate a pseudo-2D S-wave velocity section with high horizontal resolution using multichannel records by inverting high-frequency surface-wave dispersion curves calculated through cross-correlation combined with a phase-shift scanning method. This method chooses only a pair of consecutive traces within a shot gather to calculate a dispersion curve. We finally invert surface-wave dispersion curves of synthetic and real-world data. Inversion results of both synthetic and real-world data demonstrate that inverting high-frequency surface-wave dispersion curves - by a pair of traces through cross-correlation with the phase-shift scanning method and with the damped least-square method and the singular-value decomposition technique - can feasibly achieve a reliable pseudo-2D S-wave velocity section with relatively high horizontal resolution. © 2008 Elsevier B.V. All rights reserved.
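The pairwise dispersion measurement can be sketched as follows: the phase of the cross-power spectrum between two consecutive traces a distance dx apart gives the frequency-dependent travel-time delay, hence the phase velocity c(f) = 2πf·dx/Δφ(f). This uses a direct cross-spectrum phase rather than the paper's phase-shift scanning, and the non-dispersive test wavelet is an assumption for illustration.

```python
import numpy as np

def phase_velocity(tr1, tr2, dx, fs, fmin, fmax):
    """Dispersion curve from one pair of traces via cross-spectrum phase.

    c(f) = 2*pi*f*dx / dphi(f), where dphi is the unwrapped phase lag
    accumulated over the receiver spacing dx (m); fs is the sampling rate (Hz).
    """
    F1, F2 = np.fft.rfft(tr1), np.fft.rfft(tr2)
    freqs = np.fft.rfftfreq(len(tr1), d=1.0 / fs)
    dphi = np.unwrap(np.angle(F1 * np.conj(F2)))    # phase delay of tr2 vs tr1
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band], 2 * np.pi * freqs[band] * dx / dphi[band]

# Synthetic test: a 20 Hz wavelet delayed by dx / c_true at the far receiver
fs, dx, c_true = 1000.0, 2.0, 200.0
t = np.arange(2048) / fs
wavelet = lambda t0: np.exp(-400 * (t - t0) ** 2) * np.cos(2 * np.pi * 20 * (t - t0))
f, c = phase_velocity(wavelet(0.5), wavelet(0.5 + dx / c_true), dx, fs, 15, 25)
print(np.median(c))   # ~200 m/s for this non-dispersive test signal
```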
Ding, Zicheng; Chen, Bo; Ding, Junqiao; Wang, Lixiang; Han, Yanchun
2014-07-01
Supramolecular metallogels can be gained from the phosphonate substituted 4,4'-bis(N-carbazolyl)biphenyl (PCBP) in the presence of aluminum chloride in alcohols, which can donate oxygen to aid proton transfer in the aluminum organophosphorus complexes. Inside the metallogels, three-dimensional fiber networks with nanofibers entangling and intersecting with each other inside are formed. The nanofibers show layered structures with a period thickness of 0.82 nm. As the content of aluminum(III) increases, the size of the fibers becomes smaller and the fibers pack more densely. It makes the transparent gel become turbid but nevertheless improves the stability of the metallogels. NMR, FT-IR and fluorescence spectroscopy show that the coordination interactions between the phosphonate groups of PCBP molecules and aluminum(III) ions as well as the π-π interactions among PCBP molecules are involved during the gel formation process. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lv, Zun-Ren; Ji, Hai-Ming, E-mail: jhm@semi.ac.cn; Luo, Shuai
Large signal modulation characteristics of the simultaneous ground-state (GS) and excited-state (ES) lasing quantum dot lasers are theoretically investigated. Relaxation oscillations of '0 → 1' and '1 → 0' in the GS lasing region (Region I), the transition region from GS lasing to two-state lasing (Region II) and the two-state lasing region (Region III) are compared and analyzed. It is found that the overshooting power and settling time in both Regions I and III decrease as the bias current increases. However, there exist abnormal behaviors of the overshooting power and settling time in Region II owing to the occurrence of ES lasing, which lead to fuzzy eye diagrams of the GS and ES lasing. Moreover, the ES lasing in Region III possesses much better eye diagrams because of its shorter settling time and smaller overshooting power over the GS lasing in Region I.
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
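The budget terms combine in quadrature; a quick check (assuming independent error terms, which is how such budgets are normally rolled up) reproduces the quoted contingency:

```python
import numpy as np

# Root-sum-square of the NFIRAOS wavefront-error budget terms (nm) quoted in
# the abstract; contingency is what remains against the 187 nm requirement.
terms = {"first-order turbulence": 120, "opto-mechanical": 84,
         "AO components/higher-order": 74, "tip/tilt @ 50% sky": 61}
total = np.sqrt(sum(v**2 for v in terms.values()))
contingency = np.sqrt(187**2 - total**2)
print(f"RSS total = {total:.0f} nm, contingency = {contingency:.0f} nm")  # ~175, ~66
```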
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
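A simplified numerical version of the comparison (one-sided z-test with known variance, a fixed phase II patient budget, and illustrative prevalence/cost parameters not taken from the paper) shows the small-trial, high-bar design winning on the benefit-cost ratio:

```python
import numpy as np
from scipy.stats import norm

def benefit_cost(alpha, beta, effect_multiple, delta=1.0, sigma=2.0,
                 p_active=0.1, n3=600, budget=2000):
    """Benefit-cost ratio of a phase II Go/No-Go design (one-sided z-test).

    A design is sized to detect effect_multiple * delta with type-I error
    alpha and type-II error beta; under a fixed phase II patient budget,
    smaller trials let more hypotheses be tested. p_active, sigma, n3 and
    budget are illustrative assumptions.
    """
    za, zb = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    n2 = ((za + zb) * sigma / (effect_multiple * delta)) ** 2   # phase II size
    power_delta = 1 - norm.cdf(za - delta * np.sqrt(n2) / sigma)
    n_trials = budget / n2                                      # affordable POCs
    go_rate = p_active * power_delta + (1 - p_active) * alpha
    true_hits = n_trials * p_active * power_delta               # risk-adjusted benefit
    total_n = budget + n_trials * go_rate * n3                  # phase II + III cost
    return true_hits / total_n

# Small trials powered at 1.5*delta with a high Go bar vs. a larger design
# fully powered at delta: the former wins in this toy setting.
print(benefit_cost(0.05, 0.20, 1.5), benefit_cost(0.10, 0.10, 1.0))
```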
Katoh, Keiichi; Horii, Yoji; Yasuda, Nobuhiro; Wernsdorfer, Wolfgang; Toriumi, Koshiro; Breedlove, Brian K; Yamashita, Masahiro
2012-11-28
The SMM behaviour of dinuclear Ln(III)-Pc multiple-decker complexes (Ln = Tb(3+) and Dy(3+)) with energy barriers and slow-relaxation behaviour was explained by using X-ray crystallography and static and dynamic susceptibility measurements. In particular, interactions among the 4f electrons of several dinuclear Ln(III)-Pc type SMMs have never been discussed on the basis of the crystal structure. For dinuclear Tb(III)-Pc complexes, a dual magnetic relaxation process was observed. The relaxation processes are due to the anisotropic centres. Our results clearly show that the two Tb(3+) ion sites are equivalent and are consistent with the crystal structure. On the other hand, the mononuclear Tb(III)-Pc complex exhibited only a single magnetic relaxation process. This is clear evidence that the magnetic relaxation mechanism depends heavily on the dipole-dipole (f-f) interactions between the Tb(3+) ions in the dinuclear systems. Furthermore, the SMM behaviour of dinuclear Dy(III)-Pc type SMMs with smaller energy barriers compared with that of Tb(III)-Pc and slow-relaxation behaviour was explained. Dinuclear Dy(III)-Pc SMMs exhibited single-component magnetic relaxation behaviour. The results indicate that the magnetic relaxation properties of dinuclear Ln(III)-Pc multiple-decker complexes are affected by the local molecular symmetry and are extremely sensitive to tiny distortions in the coordination geometry. In other words, the spatial arrangement of the Ln(3+) ions (f-f interactions) in the crystal is important. Our work shows that the SMM properties can be fine-tuned by introducing weak intermolecular magnetic interactions in a controlled SMM spatial arrangement.
Arabidopsis Chloroplast Mini-Ribonuclease III Participates in rRNA Maturation and Intron Recycling
Hotto, Amber M.; Castandet, Benoît; Gilet, Laetitia; Higdon, Andrea; Condon, Ciarán; Stern, David B.
2015-01-01
RNase III proteins recognize double-stranded RNA structures and catalyze endoribonucleolytic cleavages that often regulate gene expression. Here, we characterize the functions of RNC3 and RNC4, two Arabidopsis thaliana chloroplast Mini-RNase III-like enzymes sharing 75% amino acid sequence identity. Whereas rnc3 and rnc4 null mutants have no visible phenotype, rnc3/rnc4 (rnc3/4) double mutants are slightly smaller and chlorotic compared with the wild type. In Bacillus subtilis, the RNase Mini-III is integral to 23S rRNA maturation. In Arabidopsis, we observed imprecise maturation of 23S rRNA in the rnc3/4 double mutant, suggesting that exoribonucleases generated staggered ends in the absence of specific Mini-III-catalyzed cleavages. A similar phenotype was found at the 3′ end of the 16S rRNA, and the primary 4.5S rRNA transcript contained 3′ extensions, suggesting that Mini-III catalyzes several processing events of the polycistronic rRNA precursor. The rnc3/4 mutant showed overaccumulation of a noncoding RNA complementary to the 4.5S-5S rRNA intergenic region, and its presence correlated with that of the extended 4.5S rRNA precursor. Finally, we found rnc3/4-specific intron degradation intermediates that are probable substrates for Mini-III and show that B. subtilis Mini-III is also involved in intron regulation. Overall, this study extends our knowledge of the key role of Mini-III in intron and noncoding RNA regulation and provides important insight into plastid rRNA maturation. PMID:25724636
Opioid errors in inpatient palliative care services: a retrospective review.
Heneka, Nicole; Shaw, Tim; Rowett, Debra; Lapkin, Samuel; Phillips, Jane L
2018-06-01
Opioids are a high-risk medicine frequently used to manage palliative patients' cancer-related pain and other symptoms. Despite the high volume of opioid use in inpatient palliative care services, and the potential for patient harm, few studies have focused on opioid errors in this population. To (i) identify the number of opioid errors reported by inpatient palliative care services, (ii) identify reported opioid error characteristics and (iii) determine the impact of opioid errors on palliative patient outcomes. A 24-month retrospective review of opioid errors reported in three inpatient palliative care services in one Australian state. Of the 55 opioid errors identified, 84% reached the patient. Most errors involved morphine (35%) or hydromorphone (29%). Opioid administration errors accounted for 76% of reported opioid errors, largely due to omitted dose (33%) or wrong dose (24%) errors. Patients were more likely to receive a lower dose of opioid than ordered as a direct result of an opioid error (57%), with errors adversely impacting pain and/or symptom management in 42% of patients. Half (53%) of the affected patients required additional treatment and/or care as a direct consequence of the opioid error. This retrospective review has provided valuable insights into the patterns and impact of opioid errors in inpatient palliative care services. Iatrogenic harm related to opioid underdosing errors contributed to palliative patients' unrelieved pain. Better understanding the factors that contribute to opioid errors and the role of safety culture in the palliative care service context warrants further investigation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites were employed in applying this technique to the gravity field parameters of the Goddard Earth Model-T1 (GEM-T1). GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
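A simplified stand-in for the calibration idea: iteratively rescale each data subset's weight until its residual chi-square matches its observation count, then read the calibrated parameter covariance from the final normal matrix. Degrees-of-freedom corrections and the subset-versus-complete-solution comparison used in the actual technique are omitted.

```python
import numpy as np

def calibrate_weights(subsets, n_iter=20):
    """Iterative subset weight calibration for a linear model y = A x.

    subsets: list of (A_k, y_k) design/observation pairs.
    Returns the estimate, per-subset weights, and parameter covariance.
    """
    w = np.ones(len(subsets))
    for _ in range(n_iter):
        N = sum(wk * Ak.T @ Ak for wk, (Ak, yk) in zip(w, subsets))
        b = sum(wk * Ak.T @ yk for wk, (Ak, yk) in zip(w, subsets))
        x = np.linalg.solve(N, b)
        for k, (Ak, yk) in enumerate(subsets):
            r = yk - Ak @ x
            w[k] = len(yk) / (r @ r)      # drive chi-square_k toward n_k
    N = sum(wk * Ak.T @ Ak for wk, (Ak, yk) in zip(w, subsets))
    return x, w, np.linalg.inv(N)

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
subs = []
for noise in (0.1, 1.0):                  # one precise subset, one noisy one
    A = rng.normal(size=(200, 2))
    subs.append((A, A @ x_true + rng.normal(0, noise, 200)))
x, w, cov = calibrate_weights(subs)
print(w[0] / w[1])                        # ~ (1.0 / 0.1)^2 = 100
```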
NASA Technical Reports Server (NTRS)
Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the size of the covariance representation by 90%, 97%, and 99%, and the computational cost of covariance propagation by 80%, 93%, and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found to be non-growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the tracer field.
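The compression idea can be sketched with an orthonormal Haar transform: express the covariance in wavelet space, zero all but the largest coefficients, and transform back. Truncating by coefficient magnitude (rather than only in the zonal direction, as in the paper) is a simplifying assumption.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix for n = 2^m (recursive construction)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # scaling (average) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # detail (difference) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

def truncate_covariance(P, keep=0.10):
    """Keep only the largest `keep` fraction of Haar coefficients of P."""
    W = haar_matrix(P.shape[0])
    C = W @ P @ W.T                              # covariance in wavelet space
    thresh = np.quantile(np.abs(C), 1.0 - keep)
    C[np.abs(C) < thresh] = 0.0
    return W.T @ C @ W                           # back to physical space

# Smooth (zonal-like) correlations compress well: small relative error at 10%
x = np.arange(64)
P = np.exp(-np.abs(x[:, None] - x[None, :]) / 8.0)
Pt = truncate_covariance(P, keep=0.10)
print(np.linalg.norm(P - Pt) / np.linalg.norm(P))
```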
Bathymetric surveying with GPS and heave, pitch, and roll compensation
Work, P.A.; Hansen, M.; Rogers, W.E.
1998-01-01
Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
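The heart of the data reduction is geometric: for a hull-fixed transducer, pitch and roll tilt the acoustic path, so the vertical depth component is the measured range scaled by cos(pitch)·cos(roll), referenced to the GPS vertical position. The sketch below omits antenna-to-transducer lever-arm terms, which a full reduction would include.

```python
import numpy as np

def corrected_bottom_elevation(gps_h, draft, depth, pitch_deg, roll_deg):
    """Reduce a sounding to bottom elevation with pitch/roll compensation.

    gps_h: GPS antenna height on the vertical datum (m); draft: vertical
    offset from antenna to transducer (m); depth: range reported by the
    fathometer (m). Heave is carried implicitly by the GPS height.
    """
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    return gps_h - draft - depth * np.cos(p) * np.cos(r)

# A 2 m sounding during a 10 degree roll: ~3 cm shallower than the raw range
print(corrected_bottom_elevation(0.0, 0.5, 2.0, 0.0, 10.0))   # -2.470 vs -2.500
```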
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
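The effect of the integral-of-position-error feedback can be reproduced with a point-mass surrogate: under a steady wind force, proportional feedback alone leaves a constant offset, while adding the integral term drives the position error to zero. Gains and parameters are illustrative, not the paper's synthesized autopilot.

```python
# Hover position hold with proportional-derivative feedback, with and without
# an integral-of-position-error term, under a steady disturbance force.
dt, m = 0.02, 1.0
kp, kd, ki = 4.0, 3.0, 1.5
wind = 2.0                                  # steady wind force (N)

for ki_test in (0.0, ki):
    x, v, ix = 0.0, 0.0, 0.0
    for _ in range(int(60 / dt)):           # 60 s of simulated flight
        err = 0.0 - x                       # command: hold the origin
        ix += err * dt                      # integral of position error
        f = kp * err - kd * v + ki_test * ix
        v += (f + wind) / m * dt
        x += v * dt
    print(f"ki = {ki_test}: steady-state error = {x:+.3f} m")  # 0.5 m vs ~0
```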
Biological Effects of Nonionizing Electromagnetic Radiation. Volume III, Number 3.
1979-03-01
…experimental errors inherent in these experiments, there was no dif- … synthetic and naturally occurring phospholipid membranes were studied using Raman … psychic healing, dowsing, and telepathy. In addition, tests of human sensitivity … cellular surface area of rat liver cells that perturb water suggest …
Code of Federal Regulations, 2011 CFR
2011-07-01
... loan defaults as well as from other overpayments of educational assistance benefits) or insurance... services furnished in error (§ 17.101(a) of this chapter). (ii) Debts resulting from services furnished in a medical emergency (§ 17.101(b) of this chapter). (iii) Other claims arising in connection with...
12 CFR 205.11 - Procedures for resolving errors.
Code of Federal Regulations, 2010 CFR
2010-01-01
... institution's findings and shall note the consumer's right to request the documents that the institution... transfer; (ii) An incorrect electronic fund transfer to or from the consumer's account; (iii) The omission... made by the financial institution relating to an electronic fund transfer; (v) The consumer's receipt...
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
Sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e., how a model's response to input data error changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of the error, the magnitude of the error, physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
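The two corruption scenarios are straightforward to reproduce; in the sketch below, sys_frac and rand_frac span the 5-15% and 5-25% ranges used in the study, while the gamma-distributed mock record and the non-negativity clip are assumptions for illustration.

```python
import numpy as np

def corrupt_precip(p_monthly, sys_frac=0.10, rand_frac=0.10, seed=0):
    """Build corrupted precipitation scenarios like those in the study.

    Systematic case: add sys_frac of each calendar month's long-term mean.
    Random case: add independent Gaussian noise with standard deviation
    rand_frac times each calendar month's std. p_monthly: (years, 12) array.
    """
    rng = np.random.default_rng(seed)
    mean_m = p_monthly.mean(axis=0)                 # per-calendar-month mean
    std_m = p_monthly.std(axis=0)
    p_sys = p_monthly + sys_frac * mean_m           # broadcast over years
    p_rand = p_monthly + rng.normal(0.0, rand_frac * std_m, p_monthly.shape)
    return p_sys, np.clip(p_rand, 0.0, None)        # precipitation stays >= 0

p = np.random.default_rng(1).gamma(2.0, 30.0, size=(30, 12))  # mock record (mm)
p_sys, p_rand = corrupt_precip(p)
```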
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Tortora, C.; Brescia, M.; Longo, G.; Radovich, M.; Napolitano, N. R.; Amaro, V.; Vellucci, C.; La Barbera, F.; Getman, F.; Grado, A.
2017-04-01
Photometric redshifts (photo-z) are fundamental in galaxy surveys to address different topics, from gravitational lensing and dark matter distribution to galaxy evolution. The Kilo Degree Survey (KiDS), i.e., the European Southern Observatory (ESO) public survey on the VLT Survey Telescope (VST), provides the unprecedented opportunity to exploit a large galaxy data set with an exceptional image quality and depth in the optical wavebands. Using a KiDS subset of about 25000 galaxies with measured spectroscopic redshifts, we have derived photo-z using (i) three different empirical methods based on supervised machine learning; (ii) the Bayesian photometric redshift model (or BPZ); and (iii) a classical spectral energy distribution (SED) template fitting procedure (LE PHARE). We confirm that, in the regions of the photometric parameter space properly sampled by the spectroscopic templates, machine learning methods provide better redshift estimates, with a lower scatter and a smaller fraction of outliers. SED fitting techniques, however, provide useful information on the galaxy spectral type, which can be effectively used to constrain systematic errors and to better characterize potential catastrophic outliers. Such classification is then used to specialize the training of regression machine learning models, by demonstrating that a hybrid approach, involving SED fitting and machine learning in a single collaborative framework, can be effectively used to improve the accuracy of photo-z estimates.
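A minimal empirical photo-z baseline of the kind compared in the paper: train a regressor on photometry with spectroscopic redshifts, then score the normalized residuals (z_phot − z_spec)/(1 + z_spec) for scatter and outlier fraction. The mock photometry and the random forest are placeholders; the paper's machine-learning models and the KiDS bands differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
z = rng.uniform(0.05, 1.0, n)                      # mock spectroscopic z
# Four mock magnitudes carrying a weak redshift signal plus photometric noise
mags = 20 + 2.5 * np.log10(1 + z)[:, None] + rng.normal(0, 0.2, (n, 4))
Xtr, Xte, ztr, zte = train_test_split(mags, z, test_size=0.3, random_state=1)

model = RandomForestRegressor(n_estimators=200, random_state=1).fit(Xtr, ztr)
dz = (model.predict(Xte) - zte) / (1 + zte)        # normalized residuals
print("scatter:", np.std(dz), "outlier fraction:", np.mean(np.abs(dz) > 0.15))
```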
The refractive index of krypton for lambda in the closed interval 168-288 nm
NASA Technical Reports Server (NTRS)
Smith, P. L.; Parkinson, W. H.; Huber, M. C. E.
1975-01-01
The index of refraction of krypton has been measured at 27 wavelengths between and including 168 and 288 nm. The probable error of each measurement is plus or minus 0.1%. Our results are compared with other measurements. Our data are about 3.8% smaller than those of Abjean et al.
AQMEII3: the EU and NA regional scale program of the ...
The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at (i) apportioning the error to the responsible processes through time-scale analysis, (ii) helping to detect the causes of model error, and (iii) identifying the processes and scales most urgently requiring dedicated investigations. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
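One standard way to apportion model-observation mean-square error into bias, variance, and covariance terms (consistent with the decomposition named above, though the exact AQMEII conventions may differ in detail) is shown below; the timescale apportionment would apply the same split to band-pass-filtered components of the series.

```python
import numpy as np

def mse_components(mod, obs):
    """MSE = (mean error)^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1-r)."""
    mod, obs = np.asarray(mod, float), np.asarray(obs, float)
    bias2 = (mod.mean() - obs.mean()) ** 2
    sm, so = mod.std(), obs.std()
    r = np.corrcoef(mod, obs)[0, 1]
    return bias2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)

rng = np.random.default_rng(0)
o = rng.normal(10, 3, 1000)
m = 0.8 * o + 2 + rng.normal(0, 1, 1000)       # biased, damped mock model
parts = mse_components(m, o)
print(parts, sum(parts), np.mean((m - o) ** 2))  # parts sum exactly to the MSE
```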
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error, and those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission
NASA Technical Reports Server (NTRS)
Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.
2014-01-01
Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
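The uncertainty-reduction bookkeeping of a linear Bayesian synthesis inversion fits in a few lines: with a Jacobian H assembled from transport footprints, observation-error variances R (random errors only, as in the study), and prior flux covariance B, the posterior covariance is A = (HᵀR⁻¹H + B⁻¹)⁻¹. The toy dimensions and footprint values below are illustrative.

```python
import numpy as np

def posterior_uncertainty(H, R_diag, B):
    """Posterior flux covariance and fractional uncertainty reduction.

    H: (n_obs, n_flux) Jacobian; R_diag: observation error variances;
    B: prior flux covariance. Reduction = 1 - sigma_post / sigma_prior.
    """
    Rinv_H = H / R_diag[:, None]
    A = np.linalg.inv(H.T @ Rinv_H + np.linalg.inv(B))
    reduction = 1.0 - np.sqrt(np.diag(A) / np.diag(B))
    return A, reduction

# Toy example: 50 column observations constraining 5 weekly flux elements,
# with 0.5 ppm measurement error; raising the error shrinks the reductions.
rng = np.random.default_rng(0)
H = np.abs(rng.normal(0.5, 0.2, size=(50, 5)))   # footprint sensitivities
A, red = posterior_uncertainty(H, np.full(50, 0.5**2), np.eye(5))
print(red)                                       # per-element uncertainty reduction
```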
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area.
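For reference, the ordinary Kriging baseline against which the variance-component extension is compared can be sketched as follows. The exponential semivariogram and its fixed parameters are assumptions, and the unknown-variance-component estimation itself is not implemented here.

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, sill=1.0, rng_len=500.0, nugget=0.0):
    """Ordinary Kriging with an exponential semivariogram,
    gamma(h) = nugget + sill * (1 - exp(-h / rng_len))."""
    gamma = lambda h: nugget + sill * (1.0 - np.exp(-h / rng_len))
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = gamma(d)
    K[n, :] = K[:, n] = 1.0              # Lagrange row/column (unbiasedness)
    K[n, n] = 0.0
    preds = []
    for p in np.atleast_2d(xy_new):
        g = np.append(gamma(np.linalg.norm(xy - p, axis=1)), 1.0)
        w = np.linalg.solve(K, g)
        preds.append(w[:n] @ z)          # weighted sum of observed TEC values
    return np.array(preds)

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])  # km
tec = np.array([20.0, 25.0, 22.0, 30.0])                                 # TECU
print(ordinary_kriging(xy, tec, [[50.0, 50.0]]))
```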
A multi points ultrasonic detection method for material flow of belt conveyor
NASA Astrophysics Data System (ADS)
Zhang, Li; He, Rongjun
2018-03-01
To address the large detection error of single-point ultrasonic ranging in belt-conveyor material flow measurement when coal is large or unevenly distributed, a material flow detection method for belt conveyors is designed based on multi-point ultrasonic counter ranging. The method estimates the approximate cross-sectional area of the material by ranging multiple points on the material and belt surfaces, and then obtains the material flow from the running speed of the belt conveyor. Test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large, unevenly distributed coal.
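A minimal sketch of the geometric idea behind the multi-point method (hypothetical sensor positions and coal profile; the paper's counter-ranging electronics are not modeled): depths measured at several lateral points give an approximate cross-sectional area, and flow follows from the belt speed.

```python
import numpy as np

def material_flow(sensor_x, h_belt, h_material, belt_speed, rho):
    """Approximate mass flow from multi-point ultrasonic ranging.

    Heights of the belt surface and of the material surface are sampled
    at several lateral positions; the cross-sectional area is integrated
    with the trapezoid rule and multiplied by belt speed and density.
    """
    depth = np.clip(h_material - h_belt, 0.0, None)  # material depth (m)
    area = np.trapz(depth, sensor_x)                 # cross-section (m^2)
    return rho * area * belt_speed                   # mass flow (kg/s)

x = np.linspace(0.0, 1.0, 9)                # 9 measurement points across belt
h_b = np.zeros_like(x)                      # flat belt reference
h_m = 0.25 * np.exp(-((x - 0.5) / 0.2)**2)  # invented heaped-coal profile
print(material_flow(x, h_b, h_m, belt_speed=2.5, rho=900.0))
```

More measurement points resolve an uneven heap better, which is the intuition for the reduced error relative to a single-point sensor.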
Abundance profiling of extremely metal-poor stars and supernova properties in the early universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tominaga, Nozomu; Iwamoto, Nobuyuki; Nomoto, Ken'ichi, E-mail: tominaga@konan-u.ac.jp, E-mail: iwamoto.nobuyuki@jaea.go.jp, E-mail: nomoto@astron.s.u-tokyo.ac.jp
2014-04-20
After big bang nucleosynthesis, the first heavy-element enrichment in the universe was made by a supernova (SN) explosion of a population (Pop) III star (Pop III SN). The abundance ratios of elements produced from Pop III SNe are recorded in the abundance patterns of extremely metal-poor (EMP) stars. Observations of an increasing number of EMP stars have made it possible to statistically constrain the explosion properties of Pop III SNe. We present Pop III SN models whose nucleosynthesis yields individually reproduce well the abundance patterns of 48 metal-poor stars with [Fe/H] ≲ −3.5. We then derive relations between the abundance ratios of EMP stars and certain explosion properties of Pop III SNe: the higher [(C + N)/Fe] and [(C + N)/Mg] ratios correspond to the smaller ejected Fe mass and the larger compact remnant mass, respectively. Using these relations, the distributions of the abundance ratios of EMP stars are converted to those of the explosion properties of Pop III SNe. Such distributions are compared with those of the explosion properties of present-day SNe: the distribution of the ejected Fe mass of Pop III SNe has the same peak as that of the present-day SNe but shows an extended tail down to ∼10⁻²-10⁻⁵ M⊙, and the distribution of the mass of the compact remnant of Pop III SNe is as wide as that of the present-day, stellar-mass black holes. Our results demonstrate the importance of large samples of EMP stars obtained by ongoing and future EMP star surveys and subsequent high-dispersion spectroscopic observations in clarifying the nature of Pop III SNe in the early universe.
Zhang, Yachao; Yang, Yang; Jiang, Hong
2013-12-12
The 3d-4f exchange interaction plays an important role in many lanthanide-based molecular magnetic materials such as single-molecule magnets and magnetic refrigerants. In this work, we study the 3d-4f magnetic exchange interactions in a series of Cu(II)-Gd(III) (3d⁹-4f⁷) dinuclear complexes based on the numerical atomic basis, norm-conserving pseudopotential method and the density functional theory plus Hubbard U correction approach (DFT+U). We obtain an improved description of the 4f electrons by including the semicore 5s5p states in the valence part of the Gd pseudopotential. The Hubbard U correction is employed to treat the strongly correlated Cu-3d and Gd-4f electrons, which significantly improves the agreement of the predicted exchange constants, J, with experiment, indicating the importance of an accurate description of the local Coulomb correlation. The high efficiency of the DFT+U approach enables us to perform calculations with molecular crystals, which in general improve the agreement between theory and experiment, achieving a mean absolute error smaller than 2 cm⁻¹. In addition, through analyzing the physical effects of U, we identify two magnetic exchange pathways. One is ferromagnetic and involves an interaction between the Cu-3d, O-2p (bridge ligand), and the majority-spin Gd-5d orbitals. The other is antiferromagnetic, involves the Cu-3d, O-2p, and the empty minority-spin Gd-4f orbitals, and is suppressed by the planar Cu-O-O-Gd structure. This study demonstrates the accuracy of the DFT+U method for evaluating the 3d-4f exchange interactions, provides a better understanding of the exchange mechanism in the Cu(II)-Gd(III) complexes, and paves the way for exploiting the magnetic properties of 3d-4f compounds containing lanthanides other than Gd.
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models
de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo
2018-01-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and accuracy was assessed with the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to the change from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances. PMID:29599857
ECG fiducial point extraction using switching Kalman filter.
Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian
2018-04-01
In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. We denote by "mode" a specific observation equation; the switch changes between 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods. The proposed method achieves a lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
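To make the waveform model concrete, here is a minimal sketch of a sum-of-Gaussians ECG beat in the spirit of the McSharry-style model cited above; the five wave parameters are illustrative, not fitted values from the paper.

```python
import numpy as np

def ecg_beat(t, params):
    """Sum-of-Gaussians ECG beat: each wave contributes
    a * exp(-(t - mu)^2 / (2 b^2)) with amplitude a, center mu, width b."""
    return sum(a * np.exp(-0.5 * ((t - mu) / b) ** 2) for a, mu, b in params)

t = np.linspace(0.0, 1.0, 500)  # one beat, time in seconds
waves = [( 0.10, 0.20, 0.025),  # P
         (-0.15, 0.45, 0.010),  # Q
         ( 1.00, 0.50, 0.012),  # R
         (-0.25, 0.55, 0.010),  # S
         ( 0.30, 0.78, 0.040)]  # T (all values invented for illustration)
beat = ecg_beat(t, waves)
print(beat.max(), t[beat.argmax()])  # R-peak amplitude and location
```

In the SKF setting, each segment of the beat gets its own observation equation built from such components, and the filter infers which segment ("mode") is active at each sample.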
Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.
Korth, Martin
2013-10-14
The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C₆R⁻⁶ terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
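As a toy illustration of such a pairwise dispersion term, here is a sketch only: it uses a Fermi-type damping function of the kind found in Grimme-style corrections, with placeholder C6 coefficients and van der Waals radii rather than any published parameterization.

```python
import numpy as np

def e_disp(coords, c6, r0, s6=1.0, d=20.0):
    """Pairwise -C6/R^6 dispersion energy with a damping cutoff.

    The damping factor switches the correction off at short range,
    where the electronic-structure method already covers the physics.
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])  # simple geometric combination rule
            fdamp = 1.0 / (1.0 + np.exp(-d * (r / (r0[i] + r0[j]) - 1.0)))
            e -= s6 * fdamp * c6ij / r ** 6
    return e

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0], [0.0, 3.0, 0.0]])
print(e_disp(coords, c6=[10.0, 10.0, 10.0], r0=[1.5, 1.5, 1.5]))
```

The error-statistics question the paper raises is how uncertainties in exactly these pairwise terms accumulate when the pair count grows to protein-ligand scale.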
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y
2013-08-29
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors that reached awareness and errors that did not, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or, more importantly, lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting; therefore, studies containing different mixtures of participants who are more or less confident of their own performance, or paradigms that either do or do not encourage reporting low-confidence errors, will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore that errors are detected without conscious awareness.
40 CFR 92.107 - Fuel flow measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
(iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... (40 CFR § 92.107, Protection of Environment, Control of Air Pollution from Locomotives and Locomotive Engines, Test Procedures: Fuel flow measurement.)
40 CFR 92.107 - Fuel flow measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
(iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... (40 CFR § 92.107, Protection of Environment, Control of Air Pollution from Locomotives and Locomotive Engines, Test Procedures: Fuel flow measurement.)
40 CFR 92.107 - Fuel flow measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
(iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... (40 CFR § 92.107, Protection of Environment, Control of Air Pollution from Locomotives and Locomotive Engines, Test Procedures: Fuel flow measurement.)
40 CFR 92.107 - Fuel flow measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
(iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... (40 CFR § 92.107, Protection of Environment, Control of Air Pollution from Locomotives and Locomotive Engines, Test Procedures: Fuel flow measurement.)
40 CFR 92.107 - Fuel flow measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
(iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... (40 CFR § 92.107, Protection of Environment, Control of Air Pollution from Locomotives and Locomotive Engines, Test Procedures: Fuel flow measurement.)
Results from a new air pollution model were tested against data from the Southern California Air Quality Study (SCAQS) period of 26-29 August 1987. Gross errors for sulfate, sodium, light absorption, temperatures, surface solar radiation, sulfur dioxide gas, formaldehyde gas, and ...
Application of Uniform Measurement Error Distribution
2016-03-18
77 FR 32400 - Fenamidone; Pesticide Tolerance; Technical Amendment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
..., of the preamble, the text correctly listed the tolerance level for the commodity ``strawberry'' at 0... the tolerance level for ``strawberry'' at 0.15. This technical amendment corrects that error. III. Why... revising the entry for ``Strawberry'' in paragraph (d) to read as follows: Sec. 180.579 Fenamidone...
77 FR 60917 - Trinexapac-ethyl; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... ``hog, meat by-products'' in order to correct inadvertent errors in the final rule tolerance table for...'' is revised to ``hog, meat by-products.'' V. Statutory and Executive Order Reviews This final rule... alphabetical order an entry for ``Hog, meat by-products''. iii. Revising the entries for ``Wheat, forage...
Using the Saturn V and Titan III Vibroacoustic Databanks for Random Vibration Criteria Development
NASA Technical Reports Server (NTRS)
Ferebee, R. C.
2009-01-01
This is an update to TN D-7159, "Development and Application of Vibroacoustic Structural Data Banks in Predicting Vibration Design and Test Criteria for Rocket Vehicle Structures", which was originally published in 1973. Errors in the original document have been corrected and additional data from the Titan III program have been included. Methods for using the vibroacoustic databanks for vibration test criteria development are shown, as well as all of the data with drawings and pictures of the measurement locations. An Excel spreadsheet with the data included is available from the author.
Accuracy of measurement in electrically evoked compound action potentials.
Hey, Matthias; Müller-Deile, Joachim
2015-01-15
Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
AQMEII3 evaluation of regional NA/EU simulations and ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact ...
Recursive least squares estimation and its application to shallow trench isolation
NASA Astrophysics Data System (ADS)
Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.
2003-06-01
In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on exponentially weighted moving average (EWMA) statistics to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drifts, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both PCC and the double-EWMA controller are in effect Integral-double-Integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. Moreover, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we look at an alternative approach, Recursive Least Squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for a locally constant mean model and a locally constant linear trend model, respectively. Then recursive least squares with an exponential forgetting factor is applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results. The estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to that from EWMA.
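A minimal sketch contrasting the two estimators on a synthetic drifting etch-rate series (the drift, noise level, EWMA weight and forgetting factor are invented, and the paper's metrology-delay time update is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Drifting process: etch rate decays with chamber seasoning, plus noise.
n = 200
truth = 100.0 - 0.15 * np.arange(n)
y = truth + rng.normal(0.0, 1.0, n)

# EWMA estimate of the process level (locally constant mean model).
lam = 0.3
ewma = np.empty(n)
ewma[0] = y[0]
for k in range(1, n):
    ewma[k] = lam * y[k] + (1 - lam) * ewma[k - 1]

# RLS with exponential forgetting for a local linear trend y = a + b*k.
ff = 0.95                        # forgetting factor
theta = np.zeros(2)              # [intercept, slope]
P = 1e4 * np.eye(2)
rls = np.empty(n)
for k in range(n):
    x = np.array([1.0, float(k)])
    rls[k] = x @ theta                    # one-step-ahead prediction
    g = P @ x / (ff + x @ P @ x)          # gain
    theta = theta + g * (y[k] - x @ theta)
    P = (P - np.outer(g, x) @ P) / ff

print("EWMA RMSE:", np.sqrt(np.mean((ewma - truth) ** 2)))
print("RLS  RMSE:", np.sqrt(np.mean((rls[10:] - truth[10:]) ** 2)))
```

Because the RLS model carries an explicit slope, it tracks a steady drift without the lag that a single EWMA filter exhibits, which mirrors the abstract's reported error reduction.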
Progress towards 3-cell superconducting traveling wave cavity cryogenic test
NASA Astrophysics Data System (ADS)
Kostin, R.; Avrakhov, P.; Kanareykin, A.; Yakovlev, V.; Solyak, N.
2017-12-01
This paper describes a superconducting L-band travelling wave cavity for electron linacs as an alternative to the 9-cell superconducting standing wave Tesla type cavity. A superconducting travelling wave cavity may provide 20-40% higher accelerating gradient by comparison with conventional cavities. This feature arises from an opportunity to use a smaller phase advance per cell which increases the transit time factor and affords the opportunity to use longer cavities because of its significantly smaller sensitivity to manufacturing errors. Two prototype superconducting travelling wave cavities were designed and manufactured for a high gradient travelling wave demonstration at cryogenic temperature. This paper presents the main milestones achieved towards this test.
Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru
2015-01-01
Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179
Impact of geophysical model error for recovering temporal gravity field model
NASA Astrophysics Data System (ADS)
Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang
2016-07-01
The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between HUST08a and HUST11a models, HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission with range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in the future mission, geophysical model error will be the main source for stripe error, which will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.
Regions of pollution with particulate matter in Poland
NASA Astrophysics Data System (ADS)
Rawicki, Kacper; Czarnecka, Małgorzata; Nidzgorska-Lencewicz, Jadwiga
2018-01-01
The study presents the temporal and spatial variability of particulate matter concentration in Poland in the calendar winter season (December-February). The basis for the study were the hourly and daily values of particulate matter PM10 concentration from the period 2005/06 - 2014/15, obtained from 33 air pollution monitoring stations. In Poland, the obligation to monitor the concentration of the finer fraction of particles smaller than 2.5 µm in aerodynamic diameter was introduced only in 2010. Consequently, data on PM2.5 concentration refer to a shorter period, i.e. 2009/10 - 2014/15, and were obtained from 23 stations. Using the cluster analysis (k-means method), three regions of comparable variability of particulate matter concentration were delineated. The largest region, i.e. Region I, comprises the northern and eastern central area of Poland, and its southern boundary is along the line Gorzów Wlkp-Bydgoszcz-Konin-Łódź-Kielce-Lublin. Markedly smaller Region II is located to the south of Region I. By far the smallest area was designated to Region III which covers the south west area of Poland. The delineated regions show a marked variability in terms of mean concentration of both PM fractions in winter (PM10: region I - 33 µg·m⁻³, region II - 55 µg·m⁻³, region III - 83 µg·m⁻³; PM2.5: region I - 35 µg·m⁻³, region II - 50 µg·m⁻³, region III - 60 µg·m⁻³) and, in the case of PM10, the frequency of exceeding the daily limit value.
Effect of Population III Multiplicity on Dark Star Formation
NASA Technical Reports Server (NTRS)
Stacy, Athena; Pawlik, Andreas H.; Bromm, Volker; Loeb, Abraham
2012-01-01
We numerically study the mutual interaction between dark matter (DM) and Population III (Pop III) stellar systems in order to explore the possibility of Pop III dark stars within this physical scenario. We perform a cosmological simulation, initialized at z approx. 100, which follows the evolution of gas and DM. We analyze the formation of the first mini halo at z approx. 20 and the subsequent collapse of the gas to densities of 10¹² cm⁻³. We then use this simulation to initialize a set of smaller-scale 'cut-out' simulations in which we further refine the DM to have spatial resolution similar to that of the gas. We test multiple DM density profiles, and we employ the sink particle method to represent the accreting star-forming region. We find that, for a range of DM configurations, the motion of the Pop III star-disk system serves to separate the positions of the protostars with respect to the DM density peak, such that there is insufficient DM to influence the formation and evolution of the protostars for more than approx. 5000 years. In addition, the star-disk system causes gravitational scattering of the central DM to lower densities, further decreasing the influence of DM over time. Any DM-powered phase of Pop III stars will thus be very short-lived for the typical multiple system, and DM will not serve to significantly prolong the life of Pop III stars.
Adaptation of the Nelson-Somogyi reducing-sugar assay to a microassay using microtiter plates.
Green, F; Clausen, C A; Highley, T L
1989-11-01
The Nelson-Somogyi assay for reducing sugars was adapted to microtiter plates. The primary advantages of this modified assay are (i) smaller sample and reagent volumes, (ii) elimination of boiling and filtration steps, (iii) automated measurement with a dual-wavelength scanning TLC densitometer, (iv) increased range and reproducibility, and (v) automated colorimetric readings by reflectance rather than absorbance.
Wigg, Jonathan P.; Zhang, Hong; Yang, Dong
2015-01-01
Introduction In-vivo imaging of choroidal neovascularization (CNV) has been increasingly recognized as a valuable tool in the investigation of age-related macular degeneration (AMD) in both clinical and basic research applications. Arguably the most widely utilised model replicating AMD is laser-generated CNV by rupture of Bruch's membrane in rodents. Heretofore, CNV evaluation via in-vivo imaging techniques has been hamstrung by the lack of an appropriate rodent fundus camera and of a standardised analysis method. The aim of this study was to establish a simple, quantifiable method of fluorescein fundus angiogram (FFA) image analysis for CNV lesions. Methods Laser was applied to 32 Brown Norway rats; FFA images were taken using a rodent-specific fundus camera (Micron III, Phoenix Laboratories) over 3 weeks and compared to conventional ex-vivo CNV assessment. FFA images acquired with fluorescein administered by intraperitoneal injection and by intravenous injection were compared and shown to greatly influence lesion properties. Utilising commonly used software packages, FFA images were assessed for CNV and chorioretinal burn lesion area by manually outlining the maximum border of each lesion and normalising against the optic nerve head. Net fluorescence above background and the derived value of area-corrected lesion intensity were calculated. Results CNV lesions of rats treated with anti-VEGF antibody were significantly smaller in normalised lesion area (p<0.001) and fluorescent intensity (p<0.001) than the PBS-treated control two weeks post laser. The calculated area-corrected lesion intensity was significantly smaller (p<0.001) in anti-VEGF-treated animals at 2 and 3 weeks post laser. The results obtained using FFA correlated with, and were confirmed by, conventional lesion area measurements from isolectin-stained choroidal flatmounts, where lesions of anti-VEGF-treated rats were significantly smaller at 2 weeks (p = 0.049) and 3 weeks (p<0.001) post laser. Conclusion The presented method of in-vivo FFA quantification of CNV, including acquisition-variable corrections, using the Micron III system and common-use software establishes a reliable method for detecting and quantifying CNV, enabling longitudinal studies, and represents an important alternative to conventional CNV quantification methods. PMID:26024231
Stellar winds and coronae of low-mass Population II/III stars
NASA Astrophysics Data System (ADS)
Suzuki, Takeru K.
2018-06-01
We investigated stellar winds from zero-/low-metallicity low-mass stars by magnetohydrodynamical simulations for stellar winds driven by Alfvén waves from stars with mass M = (0.6-0.8) M⊙ and metallicity Z = (0-1) Z⊙, where M⊙ and Z⊙ are the solar mass and metallicity, respectively. Alfvénic waves, which are excited by the surface convection, travel upward from the photosphere and heat up the corona by their dissipation. For lower Z, denser gas can be heated up to the coronal temperature because of the inefficient radiation cooling. The coronal density of Population II/III stars with Z ≤ 0.01 Z⊙ is one to two orders of magnitude larger than that of a solar-metallicity star with the same mass, and as a result, the mass loss rate, Ṁ, is 4.5-20 times larger. This indicates that metal accretion on low-mass Pop. III stars is negligible. The soft X-ray flux of the Pop. II/III stars is also expected to be ∼1-30 times larger than that of a solar-metallicity counterpart owing to the larger coronal density, even though the radiation cooling efficiency is smaller. A larger fraction of the input Alfvénic wave energy is transmitted to the corona in low-Z stars because they avoid severe reflection owing to the smaller density difference between the photosphere and the corona. Therefore, a larger fraction is converted to the thermal energy of the corona and the kinetic energy of the stellar wind. From this energetics argument, we finally derived a scaling of Ṁ as Ṁ ∝ L R⋆^(11/9) M⋆^(−10/9) T_eff^(11/2) [max(Z/Z⊙, 0.01)]^(−1/5), where L, R⋆, and T_eff are the stellar luminosity, radius, and effective temperature, respectively.
Cell-type-dependent action potentials and voltage-gated currents in mouse fungiform taste buds.
Kimura, Kenji; Ohtubo, Yoshitaka; Tateno, Katsumi; Takeuchi, Keita; Kumazawa, Takashi; Yoshii, Kiyonori
2014-01-01
Taste receptor cells fire action potentials in response to taste substances to trigger non-exocytotic neurotransmitter release in type II cells and exocytotic release in type III cells. We investigated possible differences between these action potentials fired by mouse taste receptor cells using in situ whole-cell recordings, and subsequently we identified their cell types immunologically with cell-type markers, an IP3 receptor (IP3 R3) for type II cells and a SNARE protein (SNAP-25) for type III cells. Cells not immunoreactive to these antibodies were examined as non-IRCs. Here, we show that type II cells and type III cells fire action potentials using different ionic mechanisms, and that non-IRCs also fire action potentials with either of the ionic mechanisms. The width of action potentials was significantly narrower and their afterhyperpolarization was deeper in type III cells than in type II cells. Na(+) current density was similar in type II cells and type III cells, but it was significantly smaller in non-IRCs than in the others. Although outwardly rectifying current density was similar between type II cells and type III cells, tetraethylammonium (TEA) preferentially suppressed the density in type III cells and the majority of non-IRCs. Our mathematical model revealed that the shape of action potentials depended on the ratio of TEA-sensitive current density and TEA-insensitive current one. The action potentials of type II cells and type III cells under physiological conditions are discussed. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
O-H bond oxidation by a monomeric Mn(III)-OMe complex.
Wijeratne, Gayan B; Day, Victor W; Jackson, Timothy A
2015-02-21
Manganese-containing, mid-valent oxidants (Mn(III)-OR) that mediate proton-coupled electron-transfer (PCET) reactions are central to a variety of crucial enzymatic processes. The Mn-dependent enzyme lipoxygenase is such an example, where a Mn(III)-OH unit activates fatty acid substrates for peroxidation by an initial PCET. This present work describes the quantitative generation of the Mn(III)-OMe complex, [Mn(III)(OMe)(dpaq)](+) (dpaq = 2-[bis(pyridin-2-ylmethyl)]amino-N-quinolin-8-yl-acetamidate) via dioxygen activation by [Mn(II)(dpaq)](+) in methanol at 25 °C. The X-ray diffraction structure of [Mn(III)(OMe)(dpaq)](+) exhibits a Mn-OMe group, with a Mn-O distance of 1.825(4) Å, that is trans to the amide functionality of the dpaq ligand. The [Mn(III)(OMe)(dpaq)](+) complex is quite stable in solution, with a half-life of 26 days in MeCN at 25 °C. [Mn(III)(OMe)(dpaq)](+) can activate phenolic O-H bonds with bond dissociation free energies (BDFEs) of less than 79 kcal mol(-1) and reacts with the weak O-H bond of TEMPOH (TEMPOH = 2,2'-6,6'-tetramethylpiperidine-1-ol) with a hydrogen/deuterium kinetic isotope effect (H/D KIE) of 1.8 in MeCN at 25 °C. This isotope effect, together with other experimental evidence, is suggestive of a concerted proton-electron transfer (CPET) mechanism for O-H bond oxidation by [Mn(III)(OMe)(dpaq)](+). A kinetic and thermodynamic comparison of the O-H bond oxidation reactivity of [Mn(III)(OMe)(dpaq)](+) to other M(III)-OR oxidants is presented as an aid to gain more insight into the PCET reactivity of mid-valent oxidants. In contrast to high-valent counterparts, the limited examples of M(III)-OR oxidants exhibit smaller H/D KIEs and show weaker dependence of their oxidation rates on the driving force of the PCET reaction with O-H bonds.
Investigations of interpolation errors of angle encoders for high precision angle metrology
NASA Astrophysics Data System (ADS)
Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa
2018-06-01
Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. The results of the laboratories with advanced angle metrology capabilities are presented which were acquired by the use of four different high precision angle encoders/interpolators/rotary tables. State of the art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5 in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.
Highly improved staggered quarks on the lattice with applications to charm physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follana, E.; Davies, C.; Wong, K.
2007-03-01
We use perturbative Symanzik improvement to create a new staggered-quark action (HISQ) that has greatly reduced one-loop taste-exchange errors, no tree-level order a² errors, and no tree-level order (am)⁴ errors to leading order in the quark's velocity v/c. We demonstrate with simulations that the resulting action has taste-exchange interactions that are 3-4 times smaller than the widely used ASQTAD action. We show how to bound errors due to taste exchange by comparing ASQTAD and HISQ simulations, and demonstrate with simulations that such errors are likely no more than 1% when HISQ is used for light quarks at lattice spacings of 1/10 fm or less. The suppression of (am)⁴ errors also makes HISQ the most accurate discretization currently available for simulating c quarks. We demonstrate this in a new analysis of the ψ-η_c mass splitting using the HISQ action on lattices where am_c = 0.43 and 0.66, with full-QCD gluon configurations (from MILC). We obtain a result of 111(5) MeV which compares well with the experiment. We discuss applications of this formalism to D physics and present our first high-precision results for D_s mesons.
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
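A minimal, inviscid sketch of the momentum-based pressure-gradient estimation that such techniques build on (synthetic velocity snapshots stand in for PIV data; viscous terms and the subsequent pressure integration or Poisson solve are omitted):

```python
import numpy as np

def pressure_gradient(u, v, dx, dt, rho=1000.0):
    """In-plane pressure gradient from two consecutive planar velocity
    fields via grad(p) = -rho * (du/dt + u du/dx + v du/dy), on a
    uniform grid; an inviscid, 2D simplification for illustration."""
    dudt = (u[1] - u[0]) / dt
    dvdt = (v[1] - v[0]) / dt
    um, vm = 0.5 * (u[0] + u[1]), 0.5 * (v[0] + v[1])
    dudy, dudx = np.gradient(um, dx)   # axis 0 is y with 'xy' meshgrid
    dvdy, dvdx = np.gradient(vm, dx)
    dpdx = -rho * (dudt + um * dudx + vm * dudy)
    dpdy = -rho * (dvdt + um * dvdx + vm * dvdy)
    return dpdx, dpdy

x = np.linspace(0, 1, 32)
X, Y = np.meshgrid(x, x, indexing="xy")
u = np.stack([np.sin(X) * np.cos(Y), 1.01 * np.sin(X) * np.cos(Y)])
v = np.stack([-np.cos(X) * np.sin(Y), -1.01 * np.cos(X) * np.sin(Y)])
dpdx, dpdy = pressure_gradient(u, v, dx=x[1] - x[0], dt=1e-3)
print(dpdx.shape, float(np.abs(dpdx).max()))
```

Because the velocity enters through time and space derivatives, random PIV noise is amplified by the finite differences, which is why the optimal spatial and temporal resolutions studied in the paper matter.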
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, C.; Beardsley, R. C.; Gao, G.; Qi, J.; Lin, H.
2016-02-01
A high-resolution (up to 2 km), unstructured-grid, fully ice-sea coupled Arctic Ocean Finite-Volume Community Ocean Model (AO-FVCOM) was used to simulate the Arctic sea ice over the period 1978-2014. Good agreements were found between simulated and observed sea ice extent, concentration, drift velocity and thickness, indicating that the AO-FVCOM captured not only the seasonal and interannual variability but also the spatial distribution of the sea ice in the Arctic in the past 37 years. Compared with other six Arctic Ocean models (ECCO2, GSFC, INMOM, ORCA, NAME and UW), the AO-FVCOM-simulated ice thickness showed a higher correlation coefficient and a smaller difference with observations. An effort was also made to examine the physical processes attributing to the model-produced bias in the sea ice simulation. The error in the direction of the ice drift velocity was sensitive to the wind turning angle; smaller when the wind was stronger, but larger when the wind was weaker. This error could lead to the bias in the near-surface current in the fully or partially ice-covered zone where the ice-sea interfacial stress was a major driving force.
Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.
Observing human movements helps decoding environmental forces.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-11-01
Vision of human actions can affect several features of visual motion processing, as well as the motor responses of the observer. Here, we tested the hypothesis that action observation helps decoding environmental forces during the interception of a decelerating target within a brief time window, a task intrinsically very difficult. We employed a factorial design to evaluate the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). Button-press triggered the motion of a bullet, a piston, or a human arm. We found that the timing errors were smaller for upright scenes irrespective of gravity direction in the Bullet group, while the errors were smaller for the standard condition of normal scene and gravity in the Piston group. In the Arm group, instead, performance was better when the directions of scene and target gravity were concordant, irrespective of whether both were upright or inverted. These results suggest that the default viewer-centered reference frame is used with inanimate scenes, such as those of the Bullet and Piston protocols. Instead, the presence of biological movements in animate scenes (as in the Arm protocol) may help processing target kinematics under the ecological conditions of coherence between scene and target gravity directions.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
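A short numerical illustration of why t-as-z is anti-conservative: treating a Wald t statistic as standard normal ignores the heavier t tails, so the nominal 5% cutoff |t| > 1.96 rejects too often when the effective degrees of freedom are small (a sketch assuming SciPy is available; the df values are illustrative):

```python
from scipy import stats

# True two-sided rejection probability of the |t| > 1.96 rule,
# which df-approximation methods like Kenward-Roger aim to correct.
for df in (5, 10, 20, 50, 1000):
    alpha = 2 * stats.t.sf(1.96, df)
    print(f"df = {df:4d}: true Type 1 error at |t| > 1.96 is {alpha:.3f}")
```

For df = 10 this gives roughly 0.078 rather than 0.05, matching the paper's observation that the inflation is worst for smaller samples.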
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittmann, Christoffer; Sych, Denis; Leuchs, Gerd
2010-06-15
We investigate quantum measurement strategies capable of discriminating two coherent states probabilistically with significantly smaller error probabilities than can be obtained using nonprobabilistic state discrimination. We apply a postselection strategy to the measurement data of a homodyne detector as well as a photon number resolving detector in order to lower the error probability. We compare the two different receivers with an optimal intermediate measurement scheme where the error rate is minimized for a fixed rate of inconclusive results. The photon number resolving (PNR) receiver is experimentally demonstrated and compared to an experimental realization of a homodyne receiver with postselection. In the comparison, it becomes clear that the performance of the PNR receiver surpasses the performance of the homodyne receiver, which we prove to be optimal within any Gaussian operations and conditional dynamics.
Wu, S.-S.; Wang, L.; Qiu, X.
2008-01-01
This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
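A minimal sketch of the volume-proportional allocation idea (hypothetical block population and building volumes; the paper's additional block-level housing statistics are omitted):

```python
def sub_block_population(block_pop, volumes, target):
    """Allocate a census block population to one sub-block area,
    assuming population is proportional to residential building volume."""
    total = sum(volumes.values())
    return block_pop * volumes[target] / total

# Hypothetical block with three sub-areas and their building volumes (m^3).
volumes = {"A": 12000.0, "B": 8000.0, "C": 4000.0}
print(sub_block_population(240, volumes, "A"))  # -> 120.0
```

The abstract's finding that smaller sub-areas carry larger percentage errors is intuitive here: the proportionality assumption averages out over large areas but not over a single building or parcel.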
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
Evaluation of 4D-CT lung registration.
Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W
2009-01-01
Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.
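A minimal sketch of the landmark-based evaluation described above: the target registration error is the mean distance between landmarks mapped by the recovered transform and their counterparts in the other image (toy landmarks and a known translation stand in for a real deformation field):

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Mean target registration error (mm) over a set of landmarks."""
    mapped = np.array([transform(p) for p in fixed_pts])
    return np.linalg.norm(mapped - moving_pts, axis=1).mean()

# Toy example: the "registration" exactly recovers a known translation.
fixed = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
moving = fixed + np.array([1.0, 0.0, -2.0])
print(target_registration_error(fixed, moving,
                                lambda p: p + np.array([1.0, 0.0, -2.0])))
```

The paper's point is that this number depends heavily on where the landmarks sit: restricting them to vessel bifurcations near the mediastinum can understate the error near the pleura by a factor of about three.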
Experimental Study on the Axis Line Deflection of Ti6Al4V Titanium Alloy in Gun-Drilling Process
NASA Astrophysics Data System (ADS)
Li, Liang; Xue, Hu; Wu, Peng
2018-01-01
Titanium alloy is widely used in the aerospace industry, but it is also a typical difficult-to-cut material. During deep-hole drilling of the shaft parts of a certain large aircraft, there are problems with surface roughness, chip control and axis deviation, so gun-drilling experiments on Ti6Al4V titanium alloy were carried out to measure the axis line deflection, diameter error and surface integrity, and the causes of these errors were analyzed. Optimized process parameters were then obtained for gun-drilling of Ti6Al4V titanium alloy with a hole diameter of 17 mm. Finally, we completed deep-hole drilling to a depth of 860 mm with an overall axis error smaller than 0.2 mm and a surface roughness below 1.6 μm.
The development rainfall forecasting using kalman filter
NASA Astrophysics Data System (ADS)
Zulfi, Mohammad; Hasan, Moh.; Dwidja Purnomo, Kosala
2018-04-01
Rainfall forecasting is of great interest for agricultural planning: rainfall information supports decisions about when to plant certain commodities. In this study, rainfall is forecast using ARIMA and Kalman filter methods. The Kalman filter method expresses a time series model in linear state-space form and determines future forecasts through a recursive solution that minimizes the prediction error. The rainfall data in this research were clustered by K-means clustering, and the Kalman filter method was implemented to model and forecast rainfall in each cluster. We used ARIMA (p,d,q) models to construct the state space for the Kalman filter, so we have four groups of data and one model in each group. In conclusion, the Kalman filter method is better than the ARIMA model for rainfall forecasting in each group, as its forecast errors are smaller than those of the ARIMA model.
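A minimal sketch of the Kalman filter recursion behind this approach, for a scalar local-level model; the paper casts ARIMA(p,d,q) models in state-space form, which replaces these scalars with small matrices, and the noise variances and synthetic data below are invented:

```python
import numpy as np

def kalman_local_level(y, q=1.0, r=4.0):
    """Scalar Kalman filter for a random-walk-plus-noise (local level)
    model: predict, compute the gain, then correct with the innovation."""
    x, p = y[0], 1.0              # initial state estimate and variance
    out = np.empty(len(y))
    for k, yk in enumerate(y):
        p = p + q                 # predict (state is a random walk)
        kgain = p / (p + r)       # Kalman gain
        x = x + kgain * (yk - x)  # update with the innovation
        p = (1.0 - kgain) * p
        out[k] = x
    return out

rng = np.random.default_rng(1)
rain = 150 + np.cumsum(rng.normal(0, 2, 60)) + rng.normal(0, 6, 60)
print(kalman_local_level(rain)[-5:])  # filtered monthly rainfall (mm)
```

The recursive structure is what makes the method attractive here: each new observation updates the forecast without refitting the whole series.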
Wang, Wansheng; Chen, Long; Zhou, Jie
2015-01-01
A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or in a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and its computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063
Single molecule sequencing-guided scaffolding and correction of draft assemblies.
Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J
2017-12-06
Although single molecule sequencing is still improving, the lengths of the generated sequences are an inherent advantage in genome assembly. Prior work that utilizes long reads for genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it by a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with smaller amounts of long reads. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.
Comparing Zeeman qubits to hyperfine qubits in the context of the surface code: 174Yb+ and 171Yb+
NASA Astrophysics Data System (ADS)
Brown, Natalie C.; Brown, Kenneth R.
2018-05-01
Many systems used for quantum computing possess additional states beyond those defining the qubit. Leakage out of the qubit subspace must be considered when designing quantum error correction codes. Here we consider trapped ion qubits manipulated by Raman transitions. Zeeman qubits do not suffer from leakage errors but are sensitive to magnetic fields to first order. Hyperfine qubits can be encoded in clock states that are insensitive to magnetic fields to first order, but spontaneous scattering during the Raman transition can lead to leakage. We compare a Zeeman qubit (174Yb+) to a hyperfine qubit (171Yb+) in the context of the surface code. We find that the number of physical qubits required to reach a specific logical qubit error rate can be reduced by using 174Yb+ if the magnetic field can be stabilized with fluctuations smaller than 10 μG.
Measuring Parameters of Massive Black Hole Binaries with Partially-Aligned Spins
NASA Technical Reports Server (NTRS)
Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.
2010-01-01
It is important to understand how well the gravitational-wave observatory LISA can measure parameters of massive black hole binaries. It has been shown that including spin precession in the waveform breaks degeneracies and produces smaller expected parameter errors than a simpler, precession-free analysis. However, recent work has shown that gas in binaries can partially align the spins with the orbital angular momentum, thus reducing the precession effect. We show how this degrades the earlier results, producing more pessimistic errors in gaseous mergers. However, we then add higher harmonics to the signal model; these also break degeneracies, but they are not affected by the presence of gas. The harmonics often restore the errors in partially-aligned binaries to the same as, or better than, those obtained for fully precessing binaries with no harmonics. Finally, we investigate what LISA measurements of spin alignment can tell us about the nature of gas around a binary.
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Real-time integration of the gating variables typically employs first-order numerical methods, either forward Euler or exponential Euler (EE); EE is often preferred for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with the same computational efficiency as EE.
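The two update rules compared in the study can be sketched directly; this Python example applies them to a single gating variable under a time-varying voltage, with all parameter values chosen for illustration only. Which method wins at a given step size depends on these choices, as the error bounds described above quantify.

```python
# Sketch of the two first-order update rules analyzed in the paper, applied
# to a gating variable dx/dt = (x_inf(V) - x)/tau under a time-varying
# voltage. All parameter values are illustrative assumptions.
import numpy as np

tau = 2.0                                                    # ms, constant here
V = lambda t: -40.0 + 20.0 * np.sin(2 * np.pi * t / 25.0)    # mV
x_inf = lambda v: 1.0 / (1.0 + np.exp(-(v + 40.0) / 5.0))    # slope factor 5 mV

def integrate(dt, T=100.0, method="euler"):
    n, x, t = int(T / dt), x_inf(V(0.0)), 0.0
    for _ in range(n):
        xi = x_inf(V(t))
        if method == "euler":
            x += dt * (xi - x) / tau                  # forward Euler step
        else:
            x = xi + (x - xi) * np.exp(-dt / tau)     # exponential Euler step
        t += dt
    return x

ref = integrate(1e-3)                                 # fine-step reference
for dt in (0.05, 0.5, 1.0):
    print(dt, abs(integrate(dt, method="euler") - ref),
              abs(integrate(dt, method="ee") - ref))
```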
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors grow, with nadir angle. This paper presents an analysis of the effect of nadir angle on the spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from smoothing (blurring) of features at or below the footprint size, and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of the data points, and the spatial spectrum of the radiance field.
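A toy version of this blurring-plus-aliasing picture can be written down directly: modelling the point response function as a boxcar of width w gives the transfer function H(f) = sinc(wf), and sampling at spacing d folds power above the Nyquist frequency 1/(2d). The footprint and spacing values below are illustrative assumptions, not CERES parameters.

```python
# Illustrative calculation of the sampling-error picture described above:
# a boxcar point-response function of width w has transfer function
# H(f) = sinc(w f), so features near the footprint scale are attenuated
# (blurring), while sampling at spacing d aliases power above 1/(2d).
import numpy as np

w, d = 20.0, 30.0                       # km footprint width, km sample spacing
f = np.linspace(0.001, 0.05, 5)         # spatial frequency, cycles per km
H = np.sinc(w * f)                      # transfer function of boxcar footprint

print("f [cyc/km]    H(f)   blurring error 1-H(f)")
for fi, hi in zip(f, H):
    print(f"{fi:9.3f} {hi:8.3f} {1.0 - hi:10.3f}")
print("Nyquist frequency:", 1.0 / (2.0 * d),
      "cycles/km; power above this aliases into lower frequencies")
```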
Reduced discretization error in HZETRN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu
2013-02-01
The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water that simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
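The error metric used here reduces to a short computation: the Euclidean distance between the known grid coordinates and the digitized coordinates, expressed as a percentage of the grid's extent. In this hedged sketch the coordinates are invented for illustration.

```python
# Minimal sketch of the error metric described above: distance between
# known grid coordinates and digitized coordinates, as a percentage of the
# grid's field of view. All coordinate values here are made up.
import numpy as np

known = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
digitized = np.array([[0.1, -0.2], [10.5, 0.3], [-0.4, 10.6], [10.8, 10.7]])

errors = np.linalg.norm(digitized - known, axis=1)   # per-point distances
field = known.max() - known.min()                    # field-of-view extent
print("per-point error (% of field):", 100 * errors / field)
print("max error: %.1f%%" % (100 * errors.max() / field))
```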
Multiple indicators, multiple causes measurement error models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...
2014-06-25
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
Insight into biases and sequencing errors for amplicon sequencing with the Illumina MiSeq platform.
Schirmer, Melanie; Ijaz, Umer Z; D'Amore, Rosalinda; Hall, Neil; Sloan, William T; Quince, Christopher
2015-03-31
With read lengths of currently up to 2 × 300 bp, high throughput, and low sequencing costs, Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood, and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore, we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
[Development of Micro-Spectrometer with a Function of Timely Temperature Compensation].
Bao, Jian-guang; Liu, Zheng-kun; Chen, Huo-yao; Lin, Ji-ping; Fu, Shao-jun
2015-05-01
Temperature drift arises in the micro-spectrometer used to demodulate the varied line-space (VLS) grating position sensor on aircraft because of high-low temperature shock. We made a micro-spectrometer for the VLS grating position sensor on aircraft that maintains stable output under temperature-shock environments. To devise a real-time temperature compensation scheme, the effects of temperature change on the micro-spectrometer are analyzed and the traditional crossed Czerny-Turner (C-T) optical structure is optimized. Both optical structures are analyzed with the optical design software ZEMAX, which shows that, compared with the traditional crossed C-T structure, the new one achieves not only smaller spectrum drift but also drift with better linearity. Based on the new optical structure, a scheme using a reference wavelength for real-time temperature compensation was proposed, and a micro-fiber spectrometer was manufactured with a volume of 80 mm × 70 mm × 70 mm, an integration time of 8-1000 ms, and a full width at half maximum (FWHM) of 2 nm. Experiments show that the new spectrometer meets the design requirement. Over a high-temperature range of nearly 60 °C, the standard error of wavelength of the new spectrometer is smaller than 0.1 nm, and the maximum wavelength error is 0.14 nm, much smaller than the required 0.3 nm. The innovations of this paper are the real-time temperature compensation scheme, the new crossed C-T optical structure, and the micro-fiber spectrometer based on it.
Development of Anthropometry-Based Equations for the Estimation of the Total Body Water in Koreans
Lee, Seoung Woo; Kim, Gyeong A; Lim, Hee Jung; Lee, Sun Young; Park, Geun Ho; Song, Joon Ho
2005-01-01
For developing race-specific anthropometry-based total body water (TBW) equations, we measured TBW using bioelectrical impedance analysis (TBWBIA) in 2,943 healthy Korean adults. Among them, 2,223 were used as a reference group. Two equations (TBWK1 and TBWK2) were developed based on age, sex, height, and body weight. The adjusted R2 was 0.908 for TBWK1 and 0.910 for TBWK2. The remaining 720 subjects were used for validation. The Watson (TBWW) and Hume-Weyers (TBWH) formulas were also used. In men, TBWBIA showed the highest correlation with TBWH, followed by TBWK1, TBWK2 and TBWW. TBWK1 and TBWK2 showed lower root mean square errors (RMSE) and mean prediction errors (ME) than TBWW and TBWH. On the Bland-Altman plot, the correlation between the differences and means was smaller for TBWK2 than for TBWK1. In contrast, TBWBIA showed the highest correlation with TBWW, followed by TBWK2, TBWK1, and TBWH in females. RMSE was smallest for TBWW, followed by TBWK2, TBWK1 and TBWH. ME was closest to zero for TBWK2, followed by TBWK1, TBWW and TBWH. The correlation coefficients between the means and differences were highest for TBWW, and lowest for TBWK2. In conclusion, TBWK2 provides better accuracy with a smaller bias than TBWW or TBWH in males. TBWK2 shows similar accuracy, but with a smaller bias than TBWW, in females. PMID:15953867
Sex Bias in Research and Measurement: A Type III Error.
ERIC Educational Resources Information Center
Project on Sex Stereotyping in Education, Red Bank, NJ.
The module described in this document is part of a series of instructional modules on sex-role stereotyping in education. This document (including all but the cassette tape) is the module that examines how sex bias influences selection of research topics, sampling techniques, interpretation of data, and conclusions. Suggestions for designing…
14 CFR 121.646 - En-route fuel supply: flag and supplemental operations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... supply requirements of § 121.333; and (iii) Considering expected wind and other weather conditions. (3..., considering wind and other weather conditions expected, it has the fuel otherwise required by this part and... errors in wind forecasting. In calculating the amount of fuel required by paragraph (b)(1)(i) of this...
14 CFR 121.646 - En-route fuel supply: flag and supplemental operations.
Code of Federal Regulations, 2014 CFR
2014-01-01
... supply requirements of § 121.333; and (iii) Considering expected wind and other weather conditions. (3..., considering wind and other weather conditions expected, it has the fuel otherwise required by this part and... errors in wind forecasting. In calculating the amount of fuel required by paragraph (b)(1)(i) of this...
14 CFR 121.646 - En-route fuel supply: flag and supplemental operations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... supply requirements of § 121.333; and (iii) Considering expected wind and other weather conditions. (3..., considering wind and other weather conditions expected, it has the fuel otherwise required by this part and... errors in wind forecasting. In calculating the amount of fuel required by paragraph (b)(1)(i) of this...
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2011 CFR
2011-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of... 1040.11 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED...
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2010 CFR
2010-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of... 1040.11 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED...
12 CFR 208.3 - Application and conditions for membership in the Federal Reserve System.
Code of Federal Regulations, 2010 CFR
2010-01-01
...: (1) Financial condition and management. The financial history and condition of the applying bank and the general character of its management. (2) Capital. The adequacy of the bank's capital in accordance....3(c)(1); (iii) The application contains a material error or is otherwise deficient; or (iv) The...
2013-11-01
difference between front and rear was less pronounced. Localization errors near 0° and 180° are dominated by front-back confusions because binaural ...used to disambiguate binaural information; therefore, it can be argued that most differences in auditory localization ability resulting from
Elmer, Lawrence W; Juncos, Jorge L; Singer, Carlos; Truong, Daniel D; Criswell, Susan R; Parashos, Sotirios; Felt, Larissa; Johnson, Reed; Patni, Rajiv
2018-04-01
An Online First version of this article was made available online at http://link.springer.com/journal/40263/onlineFirst/page/1 on 12 March 2018. An error was subsequently identified in the article, and the following correction should be noted.
Patient safety awareness among Undergraduate Medical Students in Pakistani Medical School.
Kamran, Rizwana; Bari, Attia; Khan, Rehan Ahmed; Al-Eraky, Mohamed
2018-01-01
To measure the level of awareness of patient safety among undergraduate medical students in a Pakistani medical school, and to examine differences with respect to gender and prior experience with medical error. This cross-sectional study was conducted at the University of Lahore (UOL), Pakistan, from January to March 2017, and comprised final-year medical students. Data were collected using the 'APSQ-III' questionnaire on a 7-point Likert scale; eight questions were reverse coded, and the survey was anonymous. SPSS version 20 was used for statistical analysis. The questionnaire was completed by 122 students, an 81% response rate. The best score, 6.17, was given for 'team functioning', followed by 6.04 for 'long working hours as a cause of medical error'. The domains regarding involvement of the patient, confidence to report medical errors, and the role of training and learning on patient safety scored high, in the agreed range of >5. Reverse-coded questions about 'professional incompetence as an error cause' and 'disclosure of errors' showed negative perceptions. No significant differences in perceptions were found with respect to gender or prior experience with medical error (p > 0.05). Undergraduate medical students at UOL had a positive attitude towards patient safety. However, there were misconceptions about the causes of medical errors and error disclosure among students, and patient safety education needs to be incorporated into the medical curriculum of Pakistan.
Comparative assessment of the methods for exchangeable acidity measuring
NASA Astrophysics Data System (ADS)
Vanchikova, E. V.; Shamrikova, E. V.; Bespyatykh, N. V.; Zaboeva, G. A.; Bobrova, Yu. I.; Kyz"yurova, E. V.; Grishchenko, N. V.
2016-05-01
A comparative assessment of the results of measuring the exchangeable acidity and its components by different methods was performed for the main mineral genetic horizons of texturally-differentiated gleyed and nongleyed soddy-podzolic and gley-podzolic soils of the Komi Republic. It was shown that the contents of all the components of exchangeable soil acidity determined by the Russian method (with potassium chloride solution as extractant, c(KCl) = 1 mol/dm3) were significantly higher than those obtained by the international method (with barium chloride solution as extractant, c(BaCl2) = 0.1 mol/dm3). The error of the estimate of the concentration of H+ ions extracted with barium chloride solution equaled 100%, which allowed only a qualitative description of this component of the soil acidity; for the extraction with potassium chloride, the measurement error was 50%. It was also shown that the potentiometric titration prescribed by the Russian method overestimates the soil acidity attributable to exchangeable metal ions (Al(III), Fe(III), and Mn(II)) in comparison with the atomic emission method.
NASA Astrophysics Data System (ADS)
Russell, C. T.; Yu, Z. J.; Kivelson, M. G.; Khurana, K. K.
2000-10-01
The System III (1965.0) rotation period of Jupiter, as defined by the IAU based on early radio astronomical data, is 9h 55m 29.71s. Higgins et al. (JGR, 22033, 1997) have suggested, based on more recent radio data, that this period is too high by perhaps 25 ms. In the 25 years since the Pioneer and Voyager measurements, such an error would cause a 6 degree shift in apparent longitude of features tied to the internal magnetic field. A comparison of the longitude of the projection of the dipole moment obtained over the period 1975-1979 with that obtained by Galileo today shows that the average dipole location has drifted only one degree eastward in System III (1965.0). This one-degree shift is not significant given the statistical errors. A possible resolution to this apparent paradox is that the dipole moment observation is sensitive to the lower order field while the radio measurement is sensitive to the high order field at low altitude. Estimates of the secular variation from the in situ data are being pursued.
Analysis for nickel (3 and 4) in positive plates from nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Lewis, Harlan L.
1994-01-01
The NASA-Goddard procedure for destructive physical analysis (DPA) of nickel-cadmium cells contains a method for analysis of residual charged nickel as NiOOH in the positive plates at complete cell discharge, also known as nickel precharge. In the method, the Ni(III) is treated with an excess of an Fe(II) reducing agent and then back titrated with permanganate. The Ni(III) content is the difference between Fe(II) equivalents and permanganate equivalents. Problems have arisen in analysis at NAVSURFWARCENDIV, Crane because for many types of cells, particularly AA-size and some 'space-qualified' cells, zero or negative Ni(III) contents are recorded for which the manufacturer claims 3-5 percent precharge. Our approach to this problem was to reexamine the procedure for the source of error, and correct it or develop an alternative method.
Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.
2004-01-01
Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
NASA Astrophysics Data System (ADS)
Jos, Sujit; Kumar, Preetam; Chakrabarti, Saswat
Orthogonal and quasi-orthogonal codes are an integral part of any DS-CDMA based cellular system. Orthogonal codes are ideal for use in perfectly synchronous scenarios like downlink cellular communication. Quasi-orthogonal codes are preferred over orthogonal codes on the uplink, where perfect synchronization cannot be achieved. In this paper, we compare orthogonal and quasi-orthogonal codes in the presence of timing synchronization error. This gives insight into the synchronization demands of DS-CDMA systems employing the two classes of sequences. The synchronization error considered is smaller than the chip duration. Monte Carlo simulations have been carried out to verify the analytical and numerical results.
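A small numerical experiment along these lines can be set up with Walsh-Hadamard codes; the fractional-delay model below (linear interpolation between adjacent chips) is a simplifying assumption rather than the paper's exact channel model.

```python
# Sketch of multiple-access interference between Walsh (orthogonal) codes
# when the interfering user is offset by a fraction of a chip. With zero
# offset the codes are perfectly orthogonal; a sub-chip offset destroys
# orthogonality. The fractional delay is modelled by linear interpolation
# between adjacent chips, a simplifying assumption.
import numpy as np
from scipy.linalg import hadamard

N = 64
codes = hadamard(N) / np.sqrt(N)            # orthonormal Walsh codes

def interference(desired, interferer, frac):
    delayed = (1 - frac) * interferer + frac * np.roll(interferer, 1)
    return abs(np.dot(desired, delayed))

rng = np.random.default_rng(1)
for frac in (0.0, 0.1, 0.25, 0.5):          # offset as a fraction of a chip
    pairs = rng.integers(1, N, size=(200, 2))
    vals = [interference(codes[i], codes[j], frac)
            for i, j in pairs if i != j]
    print(f"offset {frac:4.2f} chip: mean |cross-correlation| = {np.mean(vals):.3f}")
```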
Best estimate of luminal cross-sectional area of coronary arteries from angiograms
NASA Technical Reports Server (NTRS)
Lee, P. L.; Selzer, R. H.
1988-01-01
We have reexamined the problem of estimating the luminal area of an elliptically-shaped coronary artery cross section from two or more radiographic diameter measurements. The expected error is found to be much smaller than the maximum potential error. In the case of two orthogonal views, closed-form expressions have been derived for calculating the area and its uncertainty. Assuming that the underlying ellipse has limited ellipticity (major/minor axis ratio less than five), it is shown that the average uncertainty in the area is less than 14 percent. When more than two views are available, we suggest using a least-squares fit to extract all available information from the data.
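For intuition, a naive estimator built from two orthogonal projected diameters, A ≈ π d1 d2 / 4, can be checked by Monte Carlo; note this simple estimator is a stand-in of our own, not the closed-form expressions derived in the paper.

```python
# Monte Carlo check of a naive two-orthogonal-view area estimate: an
# ellipse with semi-axes a, b viewed from angle theta has projected
# diameters d(theta) = 2*sqrt(a^2 cos^2 + b^2 sin^2) and d(theta + 90 deg);
# the area is estimated as pi*d1*d2/4 and compared with the true pi*a*b.
# Axis ratios up to 5:1 follow the paper; uniform angle sampling is ours.
import numpy as np

rng = np.random.default_rng(0)
ratio = rng.uniform(1.0, 5.0, 100_000)       # major/minor axis ratio <= 5
theta = rng.uniform(0.0, np.pi / 2, ratio.size)
a, b = ratio, np.ones_like(ratio)            # semi-axes, b normalized to 1

d1 = 2 * np.sqrt((a * np.cos(theta))**2 + (b * np.sin(theta))**2)
d2 = 2 * np.sqrt((a * np.sin(theta))**2 + (b * np.cos(theta))**2)
rel_err = (np.pi * d1 * d2 / 4) / (np.pi * a * b) - 1.0

print(f"mean error {100*rel_err.mean():.1f}%, max {100*rel_err.max():.1f}%")
```

As the abstract notes, the expected error is far below the worst case, which occurs for the most eccentric ellipse viewed 45° off-axis.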
SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, S; Hong, C; Kim, M
Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image of the RM was taken using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased in the positive direction as the gantry rotated clockwise, reaching a maximum at 90° and then gradually decreasing until 180°. In the counter-clockwise rotation of the gantry, the same pattern was observed but with the error increasing in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D; Dyer, B; Kumaran Nair, C
Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the change in the ion chamber's response to deviations from static 1×1 cm² and 10×10 cm² photon beams and other characteristics integral to its use in external beam detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify the detection of the simulated errors, and evaluate the reduction in patient harm resulting from detection. Methods: Two well-documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole-breast treatment, removing the physical wedge and calculating the planned dose with the Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for the simulated errors and predicted morbidity and mortality commensurate with the originally reported toxicity, indicating that the reported errors were adequately simulated. The ion chamber signal of the unmodified treatments was compared to the simulated error signal and evaluated in the Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable to within a 0.5% standard deviation (SD). Errors causing a signal change greater than 20 SD (10%) were considered detected. The whole-breast and pharyngeal tonsil IMRT simulated errors increased the signal by 215% and 969%, respectively, indicating error detection after the first fraction and first IMRT segment, respectively. Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate the detection of other, smaller-magnitude delivery errors.
Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.
Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M
2006-01-01
The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05 Å), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present day ab initio/ECP geometries, while being hundreds of times faster.
Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel
2016-10-01
Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets, and sampling error in performance measures estimated from single samples, are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at a population level, and simulation studies may be a better alternative for objectively comparing the performance of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbours (kNN). Using massively parallel processing on high-performance supercomputers, we compare generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For a smaller number of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation error as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA, as well as RF and kNN, by a clear margin as the feature set grows, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, and surpasses that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN, in some instances where the data are more variable and have smaller effect sizes, in which case it also provides more stable error estimates than kNN and LDA. Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
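A scaled-down version of such a simulation-based comparison is easy to express with scikit-learn; the data generator and its settings below are our assumptions and only mimic the spirit of the study's factorial design.

```python
# Hedged sketch of a simulation-based classifier comparison: estimated
# generalization error of LDA, SVM (RBF), RF and kNN on synthetic data with
# a controllable number of informative and redundant (correlated) features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           n_redundant=10, class_sep=0.8, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    err = 1.0 - cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated generalization error = {err:.3f}")
```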
Spectral combination of spherical gravitational curvature boundary-value problems
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel
2018-04-01
Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by adding Gaussian noise with a standard deviation of 6.5 × 10^-17 m^-1 s^-2 to the input data at a satellite altitude of 250 km above the mean Earth sphere; this noise level is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by spectral downward continuation of the vertical-vertical-vertical component, with a standard deviation of 2.104 m² s⁻², but its root mean square error is the largest, reaching 9.734 m² s⁻². Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m² s⁻². Combining more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve compared to the solution from the vertical-vertical-vertical component alone. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.
(Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2016-02-01
Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have become standard. While this has many advantages, using fewer individuals per data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most researchers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample size under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
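The worked example in the abstract (1 in 10 altered specimens shifting the mean by >0.3‰) can be reproduced with a few lines of Monte Carlo; the population parameters below are illustrative assumptions, not outputs of the authors' R model.

```python
# Back-of-envelope version of the sample-size effect described above: draw
# n individual delta-18O values from a population with seasonal scatter,
# with roughly 1 in 10 specimens diagenetically shifted, and examine the
# error of the sample mean. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(42)
true_mean, scatter, shift = -1.0, 0.5, 3.0    # permil; assumed values

for n in (5, 10, 30, 100):
    means = []
    for _ in range(10_000):
        vals = rng.normal(true_mean, scatter, n)
        altered = rng.random(n) < 0.1          # ~1 in 10 specimens altered
        vals[altered] += shift                 # diagenetic offset
        means.append(vals.mean())
    bias = np.mean(means) - true_mean
    print(f"n={n:3d}: bias={bias:+.2f} permil, sd of mean={np.std(means):.2f}")
```

With a 3‰ diagenetic offset, the expected bias is 0.1 × 3.0 = 0.3‰ regardless of n, while the scatter of the sample mean shrinks roughly as 1/√n, consistent with the abstract's example.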
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives: To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design: Audit of 3,291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7,451 medications. Severity of errors was classified; those likely to lead to patient harm were categorized as 'clinically important'. Setting: Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures: Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results: A total of 12,567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2,043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions: Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors occurring in hospitals or their underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches, including data mining of electronic clinical information systems, are required to support more effective medication error detection and mitigation. PMID:25583702
2014-01-01
We propose a smoothed-approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smoothed-approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function. This gives rise to a zero attractor that promotes sparsity of the channel taps in the channel estimation and hence accelerates convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both convergence speed and steady-state behavior for a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
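A sketch of the algorithm as described (a standard APA update plus an SL0-penalty zero attractor) might look as follows; the penalty gradient, step sizes and the sparse test channel are our assumptions, and details may differ from the paper's exact formulation.

```python
# Sketch of an affine projection channel estimator with an SL0-style zero
# attractor, assembled from the description above. Step sizes, the penalty
# gradient and the test channel are assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
N, K, mu, delta, rho, sigma = 16, 4, 0.5, 1e-3, 5e-4, 0.05

h = np.zeros(N); h[[2, 7, 12]] = [1.0, -0.5, 0.3]     # sparse true channel
x = rng.normal(size=5000)                             # input signal
w = np.zeros(N)                                       # channel estimate

for n in range(N + K, x.size):
    # Input matrix of the K most recent regressor vectors (N x K).
    A = np.column_stack([x[n - k - N:n - k][::-1] for k in range(K)])
    d = A.T @ h + 0.01 * rng.normal(size=K)           # noisy desired signal
    e = d - A.T @ w
    # Standard affine projection update with regularization delta.
    w += mu * A @ np.linalg.solve(A.T @ A + delta * np.eye(K), e)
    # SL0 zero attractor: gradient of sum(1 - exp(-w^2 / (2 sigma^2))),
    # which shrinks near-zero taps while leaving large taps untouched.
    w -= rho * (w / sigma**2) * np.exp(-w**2 / (2 * sigma**2))

print("steady-state mean squared deviation:", np.mean((w - h) ** 2))
```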
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary: We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in, and apply our method to, an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and leads to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
Species-area relationships and extinction forecasts.
Halley, John M; Sgardeli, Vasiliki; Monokrousos, Nikolaos
2013-05-01
The species-area relationship (SAR) predicts that smaller areas contain fewer species. This is the basis of the SAR method that has been used to forecast large numbers of species committed to extinction every year due to deforestation. The method has a number of issues that must be handled with care to avoid error. These include the functional form of the SAR, the choice of equation parameters, the sampling procedure used, extinction debt, and forest regeneration. Concerns about the accuracy of the SAR technique often cite errors not much larger than the natural scatter of the SAR itself. Such errors do not undermine the credibility of forecasts predicting large numbers of extinctions, although they may be a serious obstacle in other SAR applications. Very large errors can arise from misinterpretation of extinction debt, inappropriate functional form, and ignoring forest regeneration. Major challenges remain to understand better the relationship between sampling protocol and the functional form of SARs and the dynamics of relaxation, especially in continental areas, and to widen the testing of extinction forecasts. © 2013 New York Academy of Sciences.
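Under the common power-law form S = cA^z, the SAR method's core arithmetic is one line: if area shrinks from A0 to A1, the fraction of species eventually committed to extinction is 1 - (A1/A0)^z. A quick illustration, with an assumed, commonly cited z of 0.25:

```python
# One-line core of the SAR extinction forecast: with S = c * A**z, reducing
# area from A0 to A1 commits a fraction 1 - (A1/A0)**z of species to
# extinction (an extinction debt, realized only after relaxation).
# z = 0.25 is an assumed, commonly cited magnitude, not a universal value.
z = 0.25
for area_loss in (0.1, 0.5, 0.9):            # fraction of habitat destroyed
    frac_lost = 1.0 - (1.0 - area_loss) ** z
    print(f"{area_loss:.0%} area loss -> "
          f"{frac_lost:.1%} of species committed to extinction")
```

Note the asymmetry this form implies: because z < 1, modest area losses commit relatively few species, while the final fraction of area destroyed carries most of the predicted extinctions.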
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
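For concreteness, the smallest member of the binomial code family, which protects against a single boson-loss error, has the following code words (recalled here as an illustration; see the paper for the general construction with binomial coefficients):

```latex
% Smallest binomial code, protecting against a single boson-loss error
% (recalled as an illustration of the class described above).
\[
  |W_\uparrow\rangle \;=\; \frac{|0\rangle + |4\rangle}{\sqrt{2}}, \qquad
  |W_\downarrow\rangle \;=\; |2\rangle .
\]
% Both code words have even photon-number parity, so a single boson loss
% flips the parity and is detectable by a number-parity measurement.
```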
Evaluation Of Statistical Models For Forecast Errors From The HBV-Model
NASA Astrophysics Data System (ADS)
Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.
2009-04-01
Three statistical models for the forecast errors of inflow to the Langvatn reservoir in northern Norway have been constructed and tested according to how well the distributions and median values of the forecast errors fit the observations. In the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors, with parameters conditioned on climatic conditions. In the second model, the normal quantile transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the third model, positive and negative errors were modeled separately: the errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the autocorrelation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the autocorrelation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
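A minimal sketch of the second model's ingredients, namely an empirical NQT followed by a first-order autoregressive model and a back-transformed forecast interval, is given below on synthetic data; the distributional choices are assumptions.

```python
# Sketch of the NQT + AR(1) forecast-error model described above: map
# errors to standard-normal space with an empirical normal quantile
# transform, fit a first-order autoregressive coefficient there, and
# back-transform a one-step forecast interval. Data here are synthetic.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(3)
errors = rng.gamma(2.0, 1.0, 500) - 2.0         # skewed forecast errors

# NQT: ranks -> uniform probabilities -> standard normal scores.
u = rankdata(errors) / (errors.size + 1)
z = norm.ppf(u)

phi = np.corrcoef(z[:-1], z[1:])[0, 1]           # AR(1) coefficient
z_next = phi * z[-1]                             # conditional mean
s = np.sqrt(1.0 - phi**2)                        # conditional std dev

# Back-transform the 5-95% interval via the empirical quantile function.
lo, hi = norm.cdf(z_next + s * norm.ppf([0.05, 0.95]))
interval = np.quantile(errors, [lo, hi])
print(f"AR(1) phi = {phi:.2f}, 90% forecast-error interval: {interval}")
```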
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon in ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainty of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western pole centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern pole in the equatorial eastern Pacific exhibiting negative anomalies; the second is characterized by a pattern almost opposite to the first. The first type of error tends to have the worst effect on predictions of the El Niño growth phase, whereas the second often yields the largest negative effect on predictions of the decaying phase. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB for El Niño events. The linear counterpart of the NFSV, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial pattern of the tendency errors; the NFSV tendency errors cause the most significant SPB for El Niño events. In addition, the NFSVs often concentrate their large-value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent the areas where El Niño predictions are most sensitive to model errors. These areas also coincide with the sensitive areas related to initial errors identified in previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors, thereby greatly improving ENSO forecasts.
Selective speciation and determination of inorganic arsenic in water, food and biological samples.
Tuzen, Mustafa; Saygi, Kadriye Ozlem; Karaman, Isa; Soylak, Mustafa
2010-01-01
A procedure for the speciation of arsenic(III) and arsenic(V) in natural water samples has been established in the present work. Arsenic(III) ions were quantitatively recovered on Alternaria solani coated Diaion HP-2MG resin at pH 7, while the recovery of arsenic(V) was below 10%. Arsenic(V) in a mixed solution containing As(III) and As(V) was reduced using KI and L(+) ascorbic acid solution, and the procedure was then applied to the determination of total arsenic. Arsenic(V) was calculated as the difference between the total arsenic content and the As(III) content. The determination of arsenic was performed by hydride generation atomic absorption spectrometry. The influences of some alkali, alkaline earth and transition metals on the biosorption of arsenic(III) were investigated. The preconcentration factor was 35. The detection limit for As(III) (N=20, k=3) was found to be 11 ng L(-1). The relative standard deviation and relative error of the determinations were found to be lower than 7% and 4%, respectively. The accuracy of the method was confirmed with certified reference materials. The method was successfully applied to the determination and speciation of inorganic arsenic in water, food and biological samples. Copyright 2009 Elsevier Ltd. All rights reserved.
Angular Diameters of Stars from the Mark III Optical Interferometer
2003-11-01
[Table fragment, not recoverable: angular diameters with uncertainties, spectral types, and effective temperatures for individual stars (e.g., 56 Peg, HR 8775); a truncated note concerns entries lacking error estimates.]
NASA Astrophysics Data System (ADS)
Zhang, Yi
2018-01-01
This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.
The observed clustering of damaging extratropical cyclones in Europe
NASA Astrophysics Data System (ADS)
Cusack, Stephen
2016-04-01
The clustering of severe European windstorms on annual timescales has substantial impacts on the (re-)insurance industry. Our knowledge of the risk is limited by large uncertainties in estimates of clustering from typical historical storm data sets covering the past few decades. Eight storm data sets are gathered for analysis in this study in order to reduce these uncertainties. Six of the data sets contain more than 100 years of severe storm information to reduce sampling errors, and observational errors are reduced by the diversity of information sources and analysis methods between storm data sets. All storm severity measures used in this study reflect damage, to suit (re-)insurance applications. The shortest storm data set of 42 years provides indications of stronger clustering with severity, particularly for regions off the main storm track in central Europe and France. However, clustering estimates have very large sampling and observational errors, exemplified by large changes in estimates in central Europe upon removal of one stormy season, 1989/1990. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm data sets show increased clustering between more severe storms from return periods (RPs) of 0.5 years to the longest measured RPs of about 20 years. Further, they contain signs of stronger clustering off the main storm track, and weaker clustering for smaller-sized areas, though these signals are more uncertain as they are drawn from smaller data samples. These new ultra-long storm data sets provide new information on clustering to improve our management of this risk.
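Annual-timescale clustering of this kind is often quantified by the overdispersion of seasonal storm counts relative to a Poisson process; the sketch below shows that generic dispersion statistic with made-up counts (the study itself estimates clustering as a function of storm severity and return period):

```python
import numpy as np

# Clustering of storm counts is commonly quantified by their overdispersion
# relative to a Poisson process, for which the variance equals the mean.

def dispersion(counts):
    """psi = var/mean - 1: zero for Poisson, positive when storms cluster."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

# Hypothetical annual counts of severe European windstorms
seasons = [0, 3, 1, 0, 5, 2, 0, 0, 4, 1, 0, 6]
print(f"dispersion psi = {dispersion(seasons):+.2f}")  # > 0 -> clustered
```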
Crawford, Charles G.
1985-01-01
The modified tracer technique was used to determine reaeration-rate coefficients in the Wabash River in reaches near Lafayette and Terre Haute, Indiana, at streamflows ranging from 2,310 to 7,400 cu ft/sec. Chemically pure (CP grade) ethylene was used as the tracer gas, and rhodamine-WT dye was used as the dispersion-dilution tracer. Reaeration coefficients determined for a 13.5-mile reach near Terre Haute, Indiana, at streamflows of 3,360 and 7,400 cu ft/sec (71% and 43% flow duration) were 1.4/day and 1.1/day at 20 C, respectively. Reaeration-rate coefficients determined for an 18.4-mile reach near Lafayette, Indiana, at streamflows of 2,310 and 3,420 cu ft/sec (70% and 53% flow duration) were 1.2/day and 0.8/day at 20 C, respectively. None of the commonly used equations found in the literature predicted reaeration-rate coefficients similar to those measured for reaches of the Wabash River near Lafayette and Terre Haute. The average absolute prediction error for 10 commonly used reaeration equations ranged from 22% to 154%. Prediction error was much smaller in the reach near Terre Haute than in the reach near Lafayette. The overall average of the absolute prediction error for all 10 equations was 22% for the reach near Terre Haute and 128% for the reach near Lafayette. Confidence limits of results obtained from the modified tracer technique were smaller than those obtained from the equations in the literature.
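The prediction-error statistic quoted above is simple enough to state in code; the following sketch uses illustrative numbers, not the study's data:

```python
# Absolute percentage difference between an equation's predicted
# reaeration-rate coefficient and the tracer-measured value, averaged
# over reaches. All numbers below are illustrative.

def mean_abs_pct_error(predicted, measured):
    pairs = zip(predicted, measured)
    return 100.0 * sum(abs(p - m) / m for p, m in pairs) / len(measured)

k2_measured = [1.4, 1.1, 1.2, 0.8]     # per day, at 20 C
k2_predicted = [1.6, 1.5, 2.4, 2.0]    # from a literature equation
err = mean_abs_pct_error(k2_predicted, k2_measured)
print(f"average absolute prediction error: {err:.0f}%")
```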
Tsuchida, Satoshi; Thome, Kurtis
2017-01-01
Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI) traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
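A plausible reading of the %RMSE statistic reported above is the RMSE of the paired radiance differences expressed as a percentage of the mean reference radiance; the sketch below assumes that definition and uses hypothetical radiances:

```python
import numpy as np

# Percent RMSE between matched ASTER and MODIS radiances. The definition
# (normalization by the mean MODIS radiance) is an assumption; the values
# below are made up, not the study's data.

def pct_rmse(aster, modis):
    aster, modis = np.asarray(aster), np.asarray(modis)
    rmse = np.sqrt(np.mean((aster - modis) ** 2))
    return 100.0 * rmse / modis.mean()

aster_band1 = [78.2, 65.1, 90.4, 71.8]   # W m-2 sr-1 um-1, hypothetical
modis_band1 = [75.0, 63.0, 86.9, 69.5]
print(f"%RMSE = {pct_rmse(aster_band1, modis_band1):.1f}")
```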
Mehta, Saurabh P; Barker, Katherine; Bowman, Brett; Galloway, Heather; Oliashirazi, Nicole; Oliashirazi, Ali
2017-07-01
Much of the published work assessing the reliability of smartphone goniometer apps (SG) has poor generalizability, since reliability was assessed in healthy subjects. No research has established values for the standard error of measurement (SEM) or minimal detectable change (MDC), which have greater clinical utility for contextualizing the range of motion (ROM) assessed using an SG. This research examined the test-retest reproducibility, concurrent validity, SEM, and MDC values for the iPhone goniometer app (i-Goni; June Software Inc., v.1.1, San Francisco, CA) in assessing knee ROM in patients with knee osteoarthritis or those after total knee replacement. A total of 60 participants underwent data collection, which included the assessment of active knee ROM using the i-Goni and the universal goniometer (UG; EZ Read Jamar Goniometer, Patterson Medical, Warrenville, IL), knee muscle strength, and assessment of pain and lower extremity disability using the quadruple numeric pain rating scale (Q-NPRS) and lower extremity functional scale (LEFS), respectively. Intraclass correlation coefficients (ICCs) were calculated to assess the reproducibility of the knee ROM assessed using the i-Goni and UG. The Bland-Altman technique was used to examine the agreement between these knee ROM measurements. The SEM and MDC values were calculated for i-Goni-assessed knee ROM to characterize the error in a single score and the index of true change, respectively. Pearson correlation coefficients were used to examine concurrent relationships between the i-Goni and other measures. The ICC values for knee flexion/extension ROM were superior for the i-Goni (0.97/0.94) compared with the UG (0.95/0.87). The SEM values were smaller for i-Goni-assessed knee flexion/extension (2.72/1.18 degrees) compared with UG-assessed knee flexion/extension (3.41/1.62 degrees). Similarly, the MDC values were smaller for both these ROM for the i-Goni (6.3 and 2.72 degrees), indicating that a smaller change is required to infer true change in knee ROM. The i-Goni-assessed knee ROM showed the expected concurrent relationships with the UG, knee muscle strength, Q-NPRS, and LEFS. In conclusion, the i-Goni demonstrated superior reproducibility with smaller measurement error compared with the UG in assessing knee ROM in the recruited cohort. Future research can expand this inquiry by assessing the reliability of the i-Goni at other joints.
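The SEM and MDC statistics above have standard definitions, sketched below; the SD value is hypothetical and the paper's exact computation (e.g., its confidence level for the MDC) may differ, though a 90% MDC of about 6.3 degrees is consistent with the values reported:

```python
import math

# Conventional formulas behind the SEM and MDC statistics discussed above:
#   SEM = SD * sqrt(1 - ICC)        (error in a single score)
#   MDC = z * sqrt(2) * SEM         (z = 1.645 for 90%, 1.96 for 95%)

def sem(sd, icc):
    return sd * math.sqrt(1.0 - icc)

def mdc(sem_value, z=1.645):
    return z * math.sqrt(2.0) * sem_value

s = sem(sd=15.7, icc=0.97)          # hypothetical SD of knee flexion ROM
print(f"SEM = {s:.2f} deg, MDC90 = {mdc(s):.2f} deg")
```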
NASA Astrophysics Data System (ADS)
Molinario, Giuseppe; Hansen, Matthew; Potapov, Peter V.
2017-08-01
An error in the unit conversion from pixels to hectares led to all the areal quantities in the text being smaller than they should have been. Only the hectare values were corrected; no other text or table content was changed. The changes do not affect the overall results or conclusions.
Using High Spatial Resolution Digital Imagery
2005-02-01
digital base maps were high resolution U.S. Geological Survey (USGS) Digital Orthophoto Quarter Quadrangles (DOQQ). The Root Mean Square Errors (RMSE...next step was to assign real world coordinates to the linear image. The mosaics were geometrically registered to the panchromatic orthophotos ...useable thematic map from high-resolution imagery. A more practical approach may be to divide the Refuge into a set of smaller areas, or tiles
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y.
2013-01-01
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness. PMID:24009548
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Daeun; Woo, Jong-Hak; Bae, Hyun-Jin, E-mail: woo@astro.snu.ac.kr
Energetic ionized gas outflows driven by active galactic nuclei (AGNs) have been studied as a key phenomenon related to AGN feedback. To probe the kinematics of the gas in the narrow-line region, [O iii] λ5007 has been utilized in a number of studies showing nonvirial kinematic properties due to AGN outflows. In this paper, we statistically investigate whether the Hα emission line is influenced by AGN-driven outflows by measuring the kinematic properties based on the Hα line profile and comparing them with those of [O iii]. Using the spatially integrated spectra of ∼37,000 Type 2 AGNs at z < 0.3 selected from the Sloan Digital Sky Survey DR7, we find a nonlinear correlation between Hα velocity dispersion and stellar velocity dispersion that reveals the presence of the nongravitational component, especially for AGNs with a wing component in Hα. The large Hα velocity dispersion and velocity shift of luminous AGNs are clear evidence of AGN outflow impacts on hydrogen gas, while relatively smaller kinematic properties compared to those of [O iii] imply that the observed outflow effect on the Hα line is weaker than the case of [O iii].
Vieira, Luciana; Burt, Jennifer; Richardson, Peter W; Schloffer, Daniel; Fuchs, David; Moser, Alwin; Bartlett, Philip N; Reid, Gillian; Gollas, Bernhard
2017-06-01
The electrodeposition of tin, bismuth, and tin-bismuth alloys from Sn(II) and Bi(III) chlorometalate salts in the choline chloride/ethylene glycol (1:2 molar ratio) deep eutectic solvent was studied on glassy carbon and gold by cyclic voltammetry, rotating disc voltammetry, and chronoamperometry. The Sn(II)-containing electrolyte showed one voltammetric redox process corresponding to Sn(II)/Sn(0). The diffusion coefficient of [SnCl3]-, detected as the dominating species by Raman spectroscopy, was determined from Levich and Cottrell analyses. The Bi(III)-containing electrolyte showed two voltammetric reduction processes, both attributed to Bi(III)/Bi(0). Dimensionless current/time transients revealed that the electrodeposition of both Sn and Bi on glassy carbon proceeded by 3D-progressive nucleation at a low overpotential and changed to instantaneous at higher overpotentials. The nucleation rate of Bi on glassy carbon was considerably smaller than that of Sn. Elemental Sn and Bi were electrodeposited on Au-coated glass slides from their respective salt solutions, as were Sn-Bi alloys from a 2:1 Sn(II)/Bi(III) solution. The biphasic Sn-Bi alloys changed from a Bi-rich composition to a Sn-rich composition by making the deposition potential more negative.
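The Cottrell analysis mentioned above relates the diffusion-limited chronoamperometric current to the diffusion coefficient via i(t) = nFAc·sqrt(D/(πt)); a sketch of extracting D from the Cottrell slope follows, with every numerical value hypothetical:

```python
import math

# A plot of i vs t^(-1/2) is linear for a diffusion-limited transient, and
# D follows from its slope = n F A c sqrt(D/pi). Numbers are illustrative.

F = 96485.0  # Faraday constant, C/mol

def diffusion_coeff(slope, n, area_cm2, conc_mol_cm3):
    """D in cm^2/s from the Cottrell slope (in A s^0.5)."""
    return math.pi * (slope / (n * F * area_cm2 * conc_mol_cm3)) ** 2

# e.g. Sn(II)/Sn(0) is a 2-electron reduction; hypothetical 0.05 M [SnCl3]-
# on a 3 mm diameter disc electrode (area ~0.0707 cm^2)
D = diffusion_coeff(slope=2.1e-4, n=2, area_cm2=0.0707, conc_mol_cm3=5.0e-5)
print(f"D = {D:.2e} cm^2/s")
```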
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filiz Ak, N.; Brandt, W. N.; Schneider, D. P.
2014-08-20
We consider how the profile and multi-year variability properties of a large sample of C IV Broad Absorption Line (BAL) troughs change when BALs from Si IV and/or Al III are present at corresponding velocities, indicating that the line of sight intercepts at least some lower ionization gas. We derive a number of observational results for C IV BALs separated according to the presence or absence of accompanying lower ionization transitions, including measurements of composite profile shapes, equivalent width (EW), characteristic velocities, composite variation profiles, and EW variability. We also measure the correlations between EW and fractional-EW variability for C IV, Si IV, and Al III. Our measurements reveal the basic correlated changes between ionization level, kinematics, and column density expected in accretion-disk wind models; e.g., lines of sight including lower ionization material generally show deeper and broader C IV troughs that have smaller minimum velocities and that are less variable. Many C IV BALs with no accompanying Si IV or Al III BALs may have only mild or no saturation.
A comparison of earthquake backprojection imaging methods for dense local arrays
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.
2018-03-01
Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
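A much-simplified sketch of the recommended kurtosis pre-processing and delay-and-stack step is given below; the window length, alignment scheme and toy data are our own illustrative choices, not the paper's algorithm:

```python
import numpy as np
from scipy.stats import kurtosis

# Replace each raw trace with a sliding-window kurtosis characteristic
# function (impulsive arrivals stand out; polarity and noise bursts matter
# less), then shift by candidate travel times and stack.

def kurtosis_cf(trace, win):
    """Sliding-window excess kurtosis of a 1D trace (zero at the edges)."""
    out = np.zeros_like(trace, dtype=float)
    for i in range(win, len(trace)):
        out[i] = kurtosis(trace[i - win:i])
    return out

def backproject(traces, delays_samples, win=50):
    """Stack kurtosis CFs after aligning on per-station delays."""
    cfs = [np.roll(kurtosis_cf(tr, win), -d)
           for tr, d in zip(traces, delays_samples)]
    return np.mean(cfs, axis=0)   # peaks where delays match a real source

# Toy data: a weak impulsive arrival in noise, moveout 10 samples/station
rng = np.random.default_rng(0)
traces = [rng.normal(0.0, 1.0, 2000) for _ in range(8)]
for k, tr in enumerate(traces):
    tr[1000 + 10 * k] += 4.0      # weak impulsive arrival
stack = backproject(traces, delays_samples=[10 * k for k in range(8)])
print("stack peak near sample:", int(np.argmax(stack)))
```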
NASA Astrophysics Data System (ADS)
Valdes, Gilmer; Solberg, Timothy D.; Heskel, Marina; Ungar, Lyle; Simone, Charles B., II
2016-08-01
To develop a patient-specific ‘big data’ clinical decision tool to predict pneumonitis in stage I non-small cell lung cancer (NSCLC) patients after stereotactic body radiation therapy (SBRT). 61 features were recorded for 201 consecutive patients with stage I NSCLC treated with SBRT, in whom 8 (4.0%) developed radiation pneumonitis. Pneumonitis thresholds were found for each feature individually using decision stumps. The performance of three different algorithms (Decision Trees, Random Forests, RUSBoost) was evaluated. Learning curves were developed and the training error analyzed and compared to the testing error in order to evaluate the factors needed to obtain a cross-validated error smaller than 0.1. These included the addition of new features, increasing the complexity of the algorithm and enlarging the sample size and number of events. In the univariate analysis, the most important feature selected was the diffusion capacity of the lung for carbon monoxide (DLCO adj%). On multivariate analysis, the three most important features selected were the dose to 15 cc of the heart, dose to 4 cc of the trachea or bronchus, and race. Higher accuracy could be achieved if the RUSBoost algorithm was used with regularization. To predict radiation pneumonitis within an error smaller than 10%, we estimate that a sample size of 800 patients is required. Clinically relevant thresholds that put patients at risk of developing radiation pneumonitis were determined in a cohort of 201 stage I NSCLC patients treated with SBRT. The consistency of these thresholds can provide radiation oncologists with an estimate of their reliability and may inform treatment planning and patient counseling. The accuracy of the classification is limited by the number of patients in the study and not by the features gathered or the complexity of the algorithm.
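The learning-curve analysis described above can be sketched generically as follows; scikit-learn's RandomForestClassifier stands in for the paper's algorithms (RUSBoost is not part of scikit-learn), and the synthetic imbalanced data set is purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

# Compare training and cross-validated error as the sample grows to judge
# whether more data, more features, or a more complex model is the limiting
# factor. Synthetic ~4%-event-rate data stands in for the patient cohort.

X, y = make_classification(n_samples=800, n_features=61, weights=[0.96],
                           random_state=0)
sizes, train_scores, cv_scores = learning_curve(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, train_sizes=np.linspace(0.2, 1.0, 5), scoring="accuracy")

for n, tr, cv in zip(sizes, train_scores.mean(1), cv_scores.mean(1)):
    # A persistent gap between the two errors signals high variance:
    # more patients/events are needed rather than a richer model.
    print(f"n={n:4d}  train error={1 - tr:.3f}  cv error={1 - cv:.3f}")
```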
Peripheral refractive correction and automated perimetric profiles.
Wild, J M; Wood, J M; Crews, S J
1988-06-01
The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degrees, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then measured at these eccentricities (stimulus sizes 0 and III) with and without the appropriate peripheral refractive correction using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.
Kort, N P; van Raay, J J A M; Thomassen, B J W
2007-08-01
Use of an intramedullary rod is advised for the alignment of the femoral component of an Oxford phase-III prosthesis. Some users are moving toward extramedullary alignment, largely out of frustration with the accuracy of intramedullary alignment. The results of our study with 10 cadaver femora demonstrate that use of a short or long intramedullary femoral rod may result in excessive flexion alignment error of the femoral component. Understanding of the extramedullary alignment option and experience with the visual alignment of the femoral drill guide are essential for minimizing potential errors in the alignment of the femoral component.
Causes and Prevention of Laparoscopic Bile Duct Injuries
Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.
2003-01-01
Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139
Reducing wall plasma expansion with gold foam irradiated by laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lu; Ding, Yongkun, E-mail: ding-yk@vip.sina.com; Jiang, Shaoen, E-mail: jiangshn@vip.sina.com
The experimental study on the expanding plasma movement of low-density gold foam (∼1% solid density) irradiated by a high power laser is reported in this paper. Experiments were conducted using the SG-III prototype laser. Compared to solid gold with 19.3 g/cc density, the velocities of X-ray emission fronts moving off the wall are much smaller for gold foam with 0.3 g/cc density. Theoretical analysis and MULTI 1D simulation results also show less plasma blow-off, and that the density contour movement velocities of gold foam are smaller than those of solid gold, agreeing with experimental results. These results indicate that foam walls have advantages in symmetry control and lowering plasma fill when used in ignition hohlraums.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
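The weighting of estimates described above can be illustrated with the generic minimum-variance combination of two correlated estimators; the report's actual procedure (weights from average standard errors of prediction and residual cross-correlations) is richer, and all numbers below are hypothetical:

```python
# Minimum-variance weighted average of two correlated flood estimates,
# in the spirit of the weighting scheme described above. Illustrative only.

def combine(x1, se1, x2, se2, rho):
    """Combine two correlated estimates; returns estimate and its SE."""
    v1, v2, c = se1**2, se2**2, rho * se1 * se2
    w1 = (v2 - c) / (v1 + v2 - 2.0 * c)        # weight on estimate 1
    est = w1 * x1 + (1.0 - w1) * x2
    var = w1**2 * v1 + (1.0 - w1)**2 * v2 + 2.0 * w1 * (1.0 - w1) * c
    return est, var ** 0.5

# Hypothetical 100-year peak discharge (cu ft/sec): basin-characteristic
# equation vs channel-width equation, with correlated residuals
q, se = combine(x1=12500.0, se1=0.37 * 12500.0,
                x2=9800.0, se2=0.57 * 9800.0, rho=0.4)
print(f"weighted Q100 = {q:.0f} cu ft/sec, se = {se:.0f} cu ft/sec")
```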
Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo
Kent, Paul R.; Krogel, Jaron T.
2017-06-22
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Square (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF.
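A minimal sketch of the idea follows: OLS slopes are attenuated by a factor equal to the reliability of the predictor, which can be estimated from the instrument-reported uncertainties; the simulation below is our own illustration, not the authors' EIV machinery:

```python
import numpy as np

# OLS slopes are attenuated by measurement error: beta_OLS ~ lambda *
# beta_true, where the reliability lambda can be estimated from the
# KXRF-reported per-measurement uncertainties u_i as
#   lambda = 1 - mean(u_i^2) / var(x_observed).
# Data are simulated; all numbers are illustrative.

rng = np.random.default_rng(1)
n, beta_true = 2000, 0.05
bone_pb = rng.normal(20.0, 10.0, n)        # true bone lead (ug/g)
u = rng.uniform(3.0, 6.0, n)               # KXRF-reported uncertainties
x_obs = bone_pb + rng.normal(0.0, u)       # noisy KXRF measurement
y = beta_true * bone_pb + rng.normal(0.0, 0.5, n)

beta_ols = np.polyfit(x_obs, y, 1)[0]
lam = 1.0 - np.mean(u**2) / np.var(x_obs)  # reliability coefficient
print(f"OLS slope {beta_ols:.4f} -> corrected {beta_ols / lam:.4f} "
      f"(true {beta_true})")
```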
A Molecular-Genetic Study of the Arabidopsis Toc75 Gene Family
Baldwin, Amy; Wardle, Anthony; Patel, Ramesh; Dudley, Penny; Park, Soon Ki; Twell, David; Inoue, Kentaro; Jarvis, Paul
2005-01-01
Toc75 (translocon at the outer envelope membrane of chloroplasts, 75 kD) is the protein translocation channel at the outer envelope membrane of plastids and was first identified in pea (Pisum sativum) using biochemical approaches. The Arabidopsis (Arabidopsis thaliana) genome contains three Toc75-related sequences, termed atTOC75-I, atTOC75-III, and atTOC75-IV, which we studied using a range of molecular, genetic, and biochemical techniques. Expression of atTOC75-III is strongly regulated and at its highest level in young, rapidly expanding tissues. By contrast, atTOC75-IV is expressed uniformly throughout development and at a much lower level than atTOC75-III. The third sequence, atTOC75-I, is a pseudogene that is not expressed due to a gypsy/Ty3 transposon insertion in exon 1, and numerous nonsense, frame-shift, and splice-junction mutations. The expressed genes, atTOC75-III and atTOC75-IV, both encode integral envelope membrane proteins. Unlike atToc75-III, the smaller atToc75-IV protein is not processed upon targeting to the envelope, and its insertion does not require ATP at high concentrations. The atTOC75-III gene is essential for viability, since homozygous atToc75-III knockout mutants (termed toc75-III) could not be identified, and aborted seeds were observed at a frequency of approximately 25% in the siliques of self-pollinated toc75-III heterozygotes. Homozygous toc75-III embryos were found to abort at the two-cell stage. Homozygous atToc75-IV knockout plants (termed toc75-IV) displayed no obvious visible phenotypes. However, structural abnormalities were observed in the etioplasts of toc75-IV seedlings and atTOC75-IV overexpressing lines, and toc75-IV plants were less efficient at deetiolation than wild type. These results suggest some role for atToc75-IV during growth in the dark. PMID:15908591
Amin, Elizabeth A; Truhlar, Donald G
2008-01-01
We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativisitic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0.2 D in dipole moments, and ∼4 kcal/mol in Zn-ligand bond energies) that cannot be neglected for accurate modeling, but the same density functionals that do well in all-electron nonrelativistic calculations do well with relativistic effective core potentials. Although most tests are carried out with augmented polarized triple-ζ basis sets, we also carried out some tests with an augmented polarized double-ζ basis set, and we found, on average, that with the smaller basis set DFT has no loss in accuracy for dipole moments and only ∼10% less accurate bond lengths.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-10
...: Our proposed action (75 FR 27976) and the associated TSD (pages 2-3) both refer to two ICAPCD analyses... specifically references page 15 of Environ's ``Draft Final Technical Memorandum Regulation VIII BACM Analysis... reference CARB's inventory analysis to support the 50% reduction assumption. \\1\\ Printed in error as III-2...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-05
... Permit the Nullification of Trades Involving Catastrophic Errors July 30, 2013 Pursuant to Section 19(b.... Specifically, BX proposes to amend Section 6(f)(iii) to permit the nullification of trades involving... of the proposed adjusted price to accept it or else the trade will be nullified: Minimum Theoretical...
Code of Federal Regulations, 2010 CFR
2010-04-01
... or refunds-for wages paid prior to 1987. 404.1283 Section 404.1283 Employees' Benefits SOCIAL... known error which require additional time to complete; or (iii) The Social Security Administration is... additional time is needed to make a determination; or (iv) The Social Security Administration has not issued...
Code of Federal Regulations, 2011 CFR
2011-04-01
... or refunds-for wages paid prior to 1987. 404.1283 Section 404.1283 Employees' Benefits SOCIAL... known error which require additional time to complete; or (iii) The Social Security Administration is... additional time is needed to make a determination; or (iv) The Social Security Administration has not issued...
Reaction to Reexamination: More on Type III Error in Program Evaluation.
ERIC Educational Resources Information Center
Cook, Thomas J.; Dobson, L. Douglas
1982-01-01
The authors respond to criticism and analyses by Eva Rezmovic (TM 507 851) of their experimental design, particularly the suggested use of information on the amount of program service received by the control group comparable to the treatment group in their study of the relationship between program implementation levels and program outcomes. (CM)
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of regional sea conditions and for projecting future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating intervals and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there is an increasingly pressing demand for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet this challenge in terms of precision and machining efficiency, a problem that has drawn much attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be limited by using smaller tools and an amended material-removal-function model. For control of the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium- and high-frequency errors are reduced using the uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide the manufacture of large aspheric surfaces effectively.
Interferometer for Measuring Displacement to Within 20 pm
NASA Technical Reports Server (NTRS)
Zhao, Feng
2003-01-01
An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces (relative to the amplitude splits used in other interferometers) self interference and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams, and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?
Kiernan, D; Hosking, J; O'Brien, T
2016-03-01
Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects.
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
The wavefront error of large telescopes requires to be measured to check the system quality and also estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually realized by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter class telescopes due to the high cost and technological challenges of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and its astigmatism will accumulate and be enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to intuition. Finally, measurement noise can never be corrected, but it can be suppressed by means of averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with surface error and misalignment of the ACF, and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regards to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
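A toy sketch of the scheme V back-fitting loop follows, with a one-parameter linear "hydrological model" standing in for a real rainfall-runoff model; the structure (alternating least-squares recalibration and AR(1) refitting) is the point here, not the model:

```python
import numpy as np

# Back-fitting idea: alternately (1) recalibrate the hydrological model on
# observations minus the AR error prediction, and (2) refit the AR(1) model
# on the new residuals. All data below are synthetic.

def hydro(theta, rain):
    return theta * rain                        # toy runoff model

def fit_ar1(resid):
    r0, r1 = resid[:-1], resid[1:]
    return float(np.dot(r0, r1) / np.dot(r0, r0))

def backfit(rain, q_obs, n_iter=20):
    theta, phi = 1.0, 0.0
    for _ in range(n_iter):
        resid = q_obs - hydro(theta, rain)
        ar_pred = np.concatenate([[0.0], phi * resid[:-1]])
        target = q_obs - ar_pred               # AR-corrected target
        theta = float(np.dot(rain, target) / np.dot(rain, rain))  # LS refit
        phi = fit_ar1(q_obs - hydro(theta, rain))                 # AR refit
    return theta, phi

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 5.0, 300)
err = np.zeros(300)
for t in range(1, 300):                        # AR(1) structural error
    err[t] = 0.6 * err[t - 1] + rng.normal(0.0, 1.0)
q_obs = 0.8 * rain + err
print("calibrated (theta, phi):", backfit(rain, q_obs))
```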
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlations rather than on evolving the error variance.
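The χ² criterion referred to above checks that innovations are statistically consistent with the assumed covariances; a generic sketch with placeholder matrices follows (an operational system's H, P and R are of course far larger and flow-dependent):

```python
import numpy as np

# With innovation d = y - H x_f and innovation covariance S = H P H^T + R,
# E[d^T S^-1 d] equals the number of observations m when the forecast and
# observation error covariances are consistent; chi2/m far from 1 signals
# mistuned variance parameters.

def chi2_stat(d, H, P, R):
    S = H @ P @ H.T + R
    return float(d @ np.linalg.solve(S, d))

rng = np.random.default_rng(3)
n_state, m_obs = 40, 15
H = rng.normal(size=(m_obs, n_state)) / np.sqrt(n_state)
P = np.eye(n_state) * 0.5                  # forecast error covariance
R = np.eye(m_obs) * 0.1                    # observation error covariance
S = H @ P @ H.T + R
d = np.linalg.cholesky(S) @ rng.normal(size=m_obs)  # consistent innovation
print(f"chi2/m = {chi2_stat(d, H, P, R) / m_obs:.2f}  (~1 if well tuned)")
```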
Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
Online Modulation of Selective Attention is not Impaired in Healthy Aging.
Sekuler, Robert; Huang, Jie; Sekuler, Allison B; Bennett, Patrick J
2017-01-01
Background/Study Context: Reduced processing speed pervades a great many aspects of human aging and cognition. However, little is known about one aspect of cognitive aging in which speed is of the essence, namely, the speed with which older adults can deploy attention in response to a cue. The authors compared rapid temporal modulation of cued visual attention in younger (M age = 22.3 years) and older (M age = 68.9 years) adults. On each trial of a short-term memory task, a cue identified which of two briefly presented stimuli was task relevant and which one should be ignored. After a short delay, subjects demonstrated recall by reproducing from memory the task-relevant stimulus. This produced estimates of (i) accuracy with which the task-relevant stimulus was recalled, (ii) the influence of stimuli encountered on previous trials (a prototype effect), and (iii) the influence of the trial's task-irrelevant stimulus. For both groups, errors in recall were considerably smaller when selective attention was cued before rather than after presentation of the stimuli. Both groups showed serial position effects to the same degree, and both seemed equally adept at exploiting the stimuli encountered on previous trials as a means of supplementing recall accuracy on the current trial. Younger and older subjects may not differ reliably in capacity for cue-directed temporal modulation of selective attention, or in ability to draw on previously seen stimuli as memory support.
New determination of the gravitational constant G with time-of-swing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tu Liangcheng; Li Qing; Wang Qinglan
A new determination of the Newtonian gravitational constant G is presented by using a torsion pendulum with the time-of-swing method. Compared with our previous measurement with the same method, several improvements greatly reduced the uncertainties, as follows: (i) two stainless steel spheres with more homogeneous density are used as the source masses instead of the cylinders used in the previous experiment, and the offset of the mass center from the geometric center is measured and found to be much smaller than that of the cylinders; (ii) a rectangular glass block is used as the main body of the pendulum, which has fewer vibration modes and hence improves the stability of the period and reduces the uncertainty of the moment of inertia; (iii) both the pendulum and source masses are placed in the same vacuum chamber to reduce the error of measuring the relative positions; (iv) changing the configurations between the "near" and "far" positions is remotely operated by using a stepper motor to lower the environmental disturbances; and (v) the anelastic effect of the torsion fiber is first measured directly by using two disk pendulums with the help of a high-Q quartz fiber. We have performed two independent G measurements, and the two G values differ by only 9 ppm. The combined value of G is (6.67349 ± 0.00018) × 10^-11 m^3 kg^-1 s^-2 with a relative uncertainty of 26 ppm.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those becoming increasingly common in the 'era of big data'.
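The headline figure, the share of sample variation attributable to operators, is essentially a one-way ANOVA decomposition of the aligned shape coordinates. A minimal sketch on synthetic data; all dimensions and noise scales are assumed:

```python
import numpy as np

# Hypothetical replicated digitizations: n_spec specimens, each landmarked by
# 3 operators; rows are flattened (already Procrustes-aligned) shape coordinates.
rng = np.random.default_rng(1)
n_spec, n_ops, n_coords = 20, 3, 30
true_shape = rng.normal(size=(n_spec, 1, n_coords))
operator_bias = rng.normal(scale=0.5, size=(1, n_ops, n_coords))
shapes = true_shape + operator_bias + rng.normal(scale=0.3, size=(n_spec, n_ops, n_coords))

grand_mean = shapes.mean(axis=(0, 1))
ss_total = ((shapes - grand_mean) ** 2).sum()
op_means = shapes.mean(axis=0)                     # mean shape per operator
ss_operator = n_spec * ((op_means - grand_mean) ** 2).sum()
print(ss_operator / ss_total)   # fraction of sample variance due to operators
```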
2008-11-20
...techniques for generating THz radiation [5], none of them provides a THz source which is simultaneously (i) compact, (ii) highly efficient, (iii) ... are very attractive for QPM THz-wave generation because of several appealing properties, namely (i) small THz absorption coefficient (smaller by an ... with periodically-inverted crystalline orientation were used for QPM THz generation: (i) diffusion-bonded GaAs (DB-GaAs) [49], produced by
Evaluation of DCS III Transmission Alternatives, Phase II, Task 1.
1981-08-31
Agency: Defense Communications Engineering Center, Reston, Virginia 22090. Contract No. DCA 100-79-C-0044 ... Transmission Media Alternatives; Task 2. Development of Evolving DCS Transmission System Alternatives; Task 3. Identification of Technology and Regulatory ... For existing tree growth, add 15 m. For smaller vegetation, add 3 m. 11. Determine the antenna tower heights to ensure line-of-sight clearance above the
Theory of Anion-Substituted Nitrogen-Bearing III-V Alloys
1998-07-20
...was found by the Zunger group). When more than 4% arsenic is incorporated into GaN in an ordered array, the band gap closes. Calculations of the properties of random alloys predict smaller bowing ... Prepared by: M. A. Berding, Senior Research Physicist; M. van Schilfgaarde, Senior Research Physicist; A. Sher, Associate Director
Distribution of standing-wave errors in real-ear sound-level measurements.
Richmond, Susan A; Kopun, Judy G; Neely, Stephen T; Tan, Hongyang; Gorga, Michael P
2011-05-01
Standing waves can cause measurement errors when sound-pressure level (SPL) measurements are performed in a closed ear canal, e.g., during probe-microphone system calibration for distortion-product otoacoustic emission (DPOAE) testing. Alternative calibration methods, such as forward-pressure level (FPL), minimize the influence of standing waves by calculating the forward-going sound waves separate from the reflections that cause errors. Previous research compared test performance (Burke et al., 2010) and threshold prediction (Rogers et al., 2010) using SPL and multiple FPL calibration conditions, and surprisingly found no significant improvements when using FPL relative to SPL, except at 8 kHz. The present study examined the calibration data collected by Burke et al. and Rogers et al. from 155 human subjects in order to describe the frequency location and magnitude of standing-wave pressure minima to see if these errors might explain trends in test performance. Results indicate that while individual results varied widely, pressure variability was larger around 4 kHz and smaller at 8 kHz, consistent with the dimensions of the adult ear canal. The present data suggest that standing-wave errors are not responsible for the historically poor (8 kHz) or good (4 kHz) performance of DPOAE measures at specific test frequencies.
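The observed frequency pattern follows from quarter-wavelength acoustics: with the canal effectively closed at the eardrum, the first standing-wave pressure minimum at the probe falls near f = c/(4L). A rough check, assuming a typical adult residual canal length:

```python
c = 343.0        # speed of sound in air, m/s
L = 0.022        # assumed residual canal length from probe to eardrum, m
f_min = c / (4 * L)
print(f_min)     # ~3.9 kHz: standing-wave minima cluster near 4 kHz, as observed
```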
NASA Astrophysics Data System (ADS)
Hwang, Yuh-Jing; Chiong, Chau-Ching; Huang, Yau-De; Huang, Chi-Den; Liu, Ching-Tang; Kuo, Yue-Fang; Weng, Shou-Hsien; Ho, Chin-Ting; Chiang, Po-Han; Wu, Hsiao-Ling; Chang, Chih-Cheng; Jian, Shou-Ting; Lee, Chien-Feng; Lee, Yi-Wei; Pospieszalski, Marian; Henke, Doug; Finger, Ricardo; Tapia, Valeria; Gonzalez, Alvaro
2016-07-01
The ALMA Band-1 receiver front-end prototype cold and warm cartridge assemblies, including the system and key components for ALMA Band-1 receivers, have been developed, and two sets of prototype cartridges were fully tested. The measured aperture efficiency for the cold receiver is above the 80% specification except for a few frequency points. Based on the cryogenically cooled broadband low-noise amplifiers provided by NRAO, the receiver noise temperature can be as low as 15-32 K for pol-0 and 17-30 K for pol-1. Other key test items were also measured. The measured receiver beam pattern agrees well with the simulation and design. The pointing error extracted from the measured beam pattern is 0.1 degree along azimuth and 0.15 degree along elevation, well within the specification (smaller than 0.4 degree). The equivalent hot-load temperature for 5% gain compression is 492-4583 K, which meets the specification of 5% with a 373 K input thermal load. The image-band suppression is typically higher than 30 dB, and in the worst case higher than 20 dB for a 34 GHz RF signal and 38 GHz LO signal, all above the required 7 dB specification. The crosstalk between orthogonal polarizations is smaller than -85 dB with the present prototype LO. The amplitude stability is below 2.0 × 10⁻⁷, within the specification of 4.0 × 10⁻⁷ for timescales in the range 0.05 s ≤ T ≤ 100 s. The measured signal-path phase stability is smaller than 5 fs, well below the 22 fs long-term (delay drift) specification for 20 s ≤ T < 300 s. The IF output phase variation is typically smaller than 3.5° rms, against a specification of less than 4.5° rms. The measured IF output power level is -28 to -30.5 dBm with a 300 K input load. The measured IF output power flatness is less than 5.6 dB over a 2 GHz window and 1.3 dB over a 31 MHz window. The first batch of prototype cartridges will be installed on site for further commissioning in July 2017.
Compact and high resolution virtual mouse using lens array and light sensor
NASA Astrophysics Data System (ADS)
Qin, Zong; Chang, Yu-Cheng; Su, Yu-Jie; Huang, Yi-Pai; Shieh, Han-Ping David
2016-06-01
A virtual mouse based on an IR source, lens array, and light sensor was designed and implemented. The optical architecture, including lens count, lens pitch, baseline length, sensor length, lens-sensor gap, and focal length, was carefully designed to achieve low detection error and high resolution together with a compact system volume. The system volume is 3.1mm (thickness) × 4.5mm (length) × 2, which is much smaller than that of a camera-based device. A relative detection error of 0.41 mm and a minimum resolution of 26 ppi were verified in experiments, so the device can replace a conventional touchpad/touchscreen. If the system thickness is relaxed to 20 mm, a resolution higher than 200 ppi can be achieved, enough to replace a real mouse.
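The trade-off between baseline, focal length, and detection error is standard triangulation geometry: depth uncertainty grows as the square of range. A minimal sketch, with every number an assumed placeholder rather than a value from the paper:

```python
# Depth error of a two-aperture triangulation sensor: dz = z**2 * dd / (f * B)
f = 2.0e-3    # focal length of each lenslet, m (assumed)
B = 3.0e-3    # baseline between lenslets, m (assumed)
dd = 5.0e-6   # smallest resolvable disparity on the sensor, m (assumed)
for z in (0.05, 0.10, 0.20):            # fingertip range, m
    dz = z**2 * dd / (f * B)
    print(f"z = {z:.2f} m -> depth error ~ {dz*1e3:.2f} mm")
```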
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua), and the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a complete automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.
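The counting task itself reduces to thresholding and connected-component labeling, with a size filter to suppress small specks such as grid artifacts. A minimal sketch of such a pipeline; the threshold and minimum area are assumptions:

```python
import numpy as np
from scipy import ndimage

def count_rbc(gray, threshold=0.5, min_area=50):
    """Count red blood cells in a grayscale image with values in [0, 1].

    A crude stand-in for the paper's pipeline: global threshold, connected-
    component labeling, and rejection of small specks such as grid artifacts.
    """
    mask = gray < threshold                  # cells darker than background
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int((np.asarray(areas) >= min_area).sum())

# Synthetic demo image: three dark disks on a bright background.
img = np.ones((256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx in [(60, 60), (120, 180), (200, 90)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 12 ** 2] = 0.2
print(count_rbc(img))   # -> 3
```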
The Barnes-Evans color-surface brightness relation: A preliminary theoretical interpretation
NASA Technical Reports Server (NTRS)
Shipman, H. L.
1980-01-01
Model atmosphere calculations are used to assess whether an empirically derived relation between V-R and surface brightness is independent of a variety of stellar parameters, including surface gravity. This relationship is used in a variety of applications, including the determination of the distances of Cepheid variables using a method based on the Baade-Wesselink method. It is concluded that the use of a main-sequence relation between V-R color and surface brightness in determining radii of giant stars is subject to systematic errors that are smaller than 10% in the determination of a radius or distance for temperatures cooler than 12,000 K. The error in white dwarf radii determined from a main-sequence color-surface brightness relation is roughly 10%.
Calibrating photometric redshifts of luminous red galaxies
Padmanabhan, Nikhil; Budavari, Tamas; Schlegel, David J.; ...
2005-05-01
We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS-2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ ≈ 0.03 for redshifts less than 0.55 and worsens at higher redshift (≈ 0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
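The sum-of-Gaussians parametrization of the error distribution can be reproduced with an off-the-shelf mixture fit. A sketch on synthetic residuals standing in for zphot - zspec; the component count and scales are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic photo-z errors: a narrow core plus a broader tail, standing in
# for the (zphot - zspec) residuals of the spectroscopic calibration sample.
rng = np.random.default_rng(2)
errs = np.concatenate([rng.normal(0.0, 0.03, 4000),
                       rng.normal(0.02, 0.08, 400)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(errs.reshape(-1, 1))
print(gmm.weights_.round(3))                      # mixture weights
print(gmm.means_.ravel().round(3))                # component means
print(np.sqrt(gmm.covariances_).ravel().round(3)) # component widths
```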
Crystal structures of two cross-bridged chromium(III) tetraazamacrocycles
Prior, Timothy J.; Maples, Danny L.; Maples, Randall D.; Hoffert, Wesley A.; Parsell, Trenton H.; Silversides, Jon D.; Archibald, Stephen J.; Hubin, Timothy J.
2014-01-01
The crystal structure of dichlorido(4,10-dimethyl-1,4,7,10-tetraazabicyclo[5.5.2]tetradecane)chromium(III) hexafluoridophosphate, [CrCl2(C12H26N4)]PF6, (I), has monoclinic symmetry (space group P21/n) at 150 K. The structure of the related dichlorido(4,11-dimethyl-1,4,8,11-tetraazabicyclo[6.6.2]hexadecane)chromium(III) hexafluoridophosphate, [CrCl2(C14H30N4)]PF6, (II), also displays monoclinic symmetry (space group P21/c) at 150 K. In each case, the CrIII ion is hexacoordinate with two cis chloride ions and two non-adjacent N atoms bound cis equatorially and the other two non-adjacent N atoms bound trans axially in a cis-V conformation of the macrocycle. The extent of the distortion from the preferred octahedral coordination geometry of the CrIII ion is determined by the parent macrocycle ring size, with the larger cross-bridged cyclam ring in (II) better able to accommodate this preference and the smaller cross-bridged cyclen ring in (I) requiring more distortion away from octahedral geometry. PMID:25309165
Numerical estimation of the relative entropy of entanglement
NASA Astrophysics Data System (ADS)
Zinchenko, Yuriy; Friedland, Shmuel; Gour, Gilad
2010-11-01
We propose a practical algorithm for the calculation of the relative entropy of entanglement (REE), defined as the minimum relative entropy between a state and the set of states with positive partial transpose. Our algorithm is based on a practical semidefinite cutting plane approach. In low dimensions, the implementation of the algorithm in MATLAB provides an estimation of the REE with an absolute error smaller than 10⁻³.
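The quantity being minimized is the quantum relative entropy S(ρ||σ) = Tr[ρ(log ρ − log σ)]. The sketch below only evaluates this objective for a fixed σ; the REE itself additionally requires the minimization over the PPT set, which the paper handles with semidefinite cutting planes:

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    """Quantum relative entropy S(rho || sigma) = Tr[rho (log rho - log sigma)],
    in nats. Assumes both inputs are full-rank density matrices."""
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

# Example: a mixed two-qubit state against the maximally mixed state.
psi = np.array([1.0, 0.0, 0.0, 0.5])
psi /= np.linalg.norm(psi)
rho = 0.9 * np.outer(psi, psi) + 0.1 * np.eye(4) / 4   # mixed, full rank
sigma = np.eye(4) / 4
print(rel_entropy(rho, sigma))
```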
Correcting Surface Figure Error in Imaging Satellites Using a Deformable Mirror
2013-12-01
background understanding about the Naval Postgraduate School's SMT test-bed and the required performance for mirror surface figures. The ... Postgraduate School. Larger than the Hubble Space Telescope, but smaller than the JWST (see Figure 2), the SMT is an advanced test-bed to research the ... orientation (from [3]). The six segments of the primary mirror have a lightweight, deformable, nanolaminate face with actuators across the rear
2007-10-01
5.3.1.1 Study of Surf Zone Environment; 5.3.2 Research Needs: High Priority ... Detection of Smaller Munitions Items; Study of Surf Zone Environment; Improve Navigation Error Analysis; Develop Cooperative Cued Platforms ... towbodies, AUVs, ROVs, HOVs, and divers. Surveys in high-energy surf zones present unique difficulties. Finally, participants stressed that the survey
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
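The quoted reductions are plain variance bookkeeping: removing a repeatable component subtracts its variance from the total. A sketch with assumed component sizes chosen only to land in the reported ranges:

```python
sigma_total = 0.70                 # total aleatory std in ln units (assumed)
tau_site = 0.30                    # repeatable site-specific term (assumed)
tau_path = 0.49                    # repeatable path-specific term (assumed)

sigma_ss = (sigma_total**2 - tau_site**2) ** 0.5
sigma_sp = (sigma_total**2 - tau_site**2 - tau_path**2) ** 0.5
print(1 - sigma_ss / sigma_total)  # ~0.10 -> ~10% single-site reduction
print(1 - sigma_sp / sigma_total)  # ~0.43 -> ~43% single-path reduction
```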
Single-Session Attention Bias Modification and Error-Related Brain Activity
Nelson, Brady D.; Jackson, Felicia; Amir, Nader; Hajcak, Greg
2015-01-01
An attentional bias to threat has been implicated in the etiology and maintenance of anxiety disorders. Recently, attention bias modification (ABM) has been shown to reduce threat biases and decrease anxiety. However, it is unclear whether ABM modifies neural activity linked to anxiety and risk. The current study examined the relationship between ABM and the error-related negativity (ERN), a putative biomarker of risk for anxiety disorders, and the relationship between the ERN and ABM-based changes in attention to threat. Fifty-nine participants completed a single-session of ABM and a flanker task to elicit the ERN—in counterbalanced order (i.e., ABM-before vs. ABM-after the ERN was measured). Results indicated that the ERN was smaller (i.e., less negative) among individuals who completed ABM-before relative to those who completed ABM-after. Furthermore, greater attentional disengagement from negative stimuli during ABM was associated with a smaller ERN among ABM-before and ABM-after participants. The present study suggests a direct relationship between the malleability of negative attention bias and the ERN. Explanations are provided for how ABM may contribute to reductions in the ERN. Overall, the present study indicates that a single-session of ABM may be related to a decrease in neural activity linked to anxiety and risk. PMID:26063611
Ansell, Emily B; Rando, Kenneth; Tuit, Keri; Guarnaccia, Joseph; Sinha, Rajita
2012-07-01
Cumulative adversity and stress are associated with risk of psychiatric disorders. While basic science studies show repeated and chronic stress effects on prefrontal and limbic neurons, human studies examining cumulative stress and effects on brain morphology are rare. Thus, we assessed whether cumulative adversity is associated with differences in gray matter volume, particularly in regions regulating emotion, self-control, and top-down processing in a community sample. One hundred three healthy community participants, aged 18 to 48 and 68% male, completed interview assessment of cumulative adversity and a structural magnetic resonance imaging protocol. Whole-brain voxel-based-morphometry analysis was performed adjusting for age, gender, and total intracranial volume. Cumulative adversity was associated with smaller volume in medial prefrontal cortex (PFC), insular cortex, and subgenual anterior cingulate regions (familywise error corrected, p < .001). Recent stressful life events were associated with smaller volume in two clusters: the medial PFC and the right insula. Life trauma was associated with smaller volume in the medial PFC, anterior cingulate, and subgenual regions. The interaction of greater subjective chronic stress and greater cumulative life events was associated with smaller volume in the orbitofrontal cortex, insula, and anterior and subgenual cingulate regions. Current results demonstrate that increasing cumulative exposure to adverse life events is associated with smaller gray matter volume in key prefrontal and limbic regions involved in stress, emotion and reward regulation, and impulse control. These differences found in community participants may serve to mediate vulnerability to depression, addiction, and other stress-related psychopathology. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Methods for determining soluble and insoluble Cr III and Cr VI compounds in welding fumes.
Matczak, W; Chmielnicka, J
1989-01-01
An analytical procedure for the simultaneous determination of soluble and insoluble Cr III and Cr VI compounds in welding fumes has been proposed. In the welding fume samples collected on a membrane filter, total chromium was determined with atomic absorption spectrophotometry (AAS). Glass filters with collected samples were divided into two parts. In one part of the sample, soluble and insoluble chromium was determined by means of AAS. The separation of soluble chromium III and VI was carried out on diphenylcarbazide resin. In the second part of the sample, total chromium VI was determined by means of the colorimetric method with s-diphenylcarbazide. The difference in the results of these determinations allowed the calculation of the content of total Cr III, insoluble Cr III, and insoluble Cr VI. The results of determining chromium compounds in welding fume samples collected in the welder's breathing zone and in experimental chambers are also presented in this paper. The content of total chromium in the fumes determined by AAS (from a membrane filter) and that calculated from the sum of soluble and insoluble chromium (from a glass filter) were concordant and within the limits of the admissible error for the method. Total chromium content in welding fume samples collected individually was found to range from 2.4% to 4.2%. The percentages of particular chromium compounds relative to total chromium (100%) were: total Cr III, 34%; total Cr VI, 66%; soluble chromium, 66% (of which Cr III, 20%, and Cr VI, 43%); insoluble chromium, 34% (of which Cr III, 14%, and Cr VI, 20%).
NASA Astrophysics Data System (ADS)
Scheel, Mark; Szilagyi, Bela; Blackman, Jonathan; Chu, Tony; Kidder, Lawrence; Pfeiffer, Harald; Buonanno, Alessandra; Pan, Yi; Taracchini, Andrea; SXS Collaboration
2015-04-01
We present the first numerical-relativity simulation of a compact-object binary whose gravitational waveform is long enough to cover the entire frequency band of advanced gravitational-wave detectors such as LIGO, Virgo and KAGRA, for mass ratio 7 and total mass as low as 45.5 M⊙. We find that effective-one-body models, either uncalibrated or calibrated against substantially shorter numerical-relativity waveforms at smaller mass ratios, reproduce our new waveform remarkably well, with a loss in detection rate due to modeling error smaller than 0.3%. In contrast, post-Newtonian inspiral waveforms and existing phenomenological inspiral-merger-ringdown waveforms display much greater disagreement with our new simulation. The disagreement varies substantially depending on the specific post-Newtonian approximant used.
NASA Technical Reports Server (NTRS)
Shook, D. F.; Pierce, C. R.
1972-01-01
Proton recoil distributions were obtained by using organic liquid scintillators of different sizes. The measured distributions are converted to neutron spectra by differentiation analysis for comparison to the unfolded spectra of the largest scintillator. The approximations involved in the differentiation analysis are indicated to have small effects on the precision of neutron spectra measured with the smaller scintillators but introduce significant error for the largest scintillator. In the case of the smallest cylindrical scintillator, nominally 1.2 by 1.3 cm, the efficiency is shown to be insensitive to multiple scattering and to the angular distribution of the incident flux. These characteristics of the smaller scintillator make possible its use to measure scalar flux spectra within media where high efficiency is not required.
Freire, Ricardo O; Rocha, Gerd B; Simas, Alfredo M
2005-05-02
Our previously defined Sparkle model (Inorg. Chem. 2004, 43, 2346) has been reparameterized for Eu(III) as well as newly parameterized for Gd(III) and Tb(III). The parameterizations have been carried out in a much more extensive manner, aimed at producing a new, more accurate model called Sparkle/AM1, mainly for the vast majority of all Eu(III), Gd(III), and Tb(III) complexes, which possess oxygen or nitrogen as coordinating atoms. All such complexes, which comprise 80% of all geometries present in the Cambridge Structural Database for each of the three ions, were classified into seven groups. These were regarded as a "basis" of chemical ambiance around a lanthanide, which could span the various types of ligand environments the lanthanide ion could be subjected to in any arbitrary complex where the lanthanide ion is coordinated to nitrogen or oxygen atoms. From these seven groups, 15 complexes were selected, which were defined as the parameterization set and then were used with a numerical multidimensional nonlinear optimization to find the best parameter set for reproducing chemical properties. The new parameterizations yielded an unsigned mean error for all interatomic distances between the Eu(III) ion and the ligand atoms of the first sphere of coordination (for the 96 complexes considered in the present paper) of 0.09 Å, an improvement over the value of 0.28 Å for the previous model and the value of 0.68 Å for the first model (Chem. Phys. Lett. 1994, 227, 349). Similar accuracies have been achieved for Gd(III) (0.07 Å, 70 complexes) and Tb(III) (0.07 Å, 42 complexes). Qualitative improvements have been obtained as well; nitrates now coordinate correctly as bidentate ligands. The results, therefore, indicate that Eu(III), Gd(III), and Tb(III) Sparkle/AM1 calculations possess geometry prediction accuracies for lanthanide complexes with oxygen or nitrogen atoms in the coordination polyhedron that are competitive with present-day ab initio/effective core potential calculations, while being hundreds of times faster.
NASA Astrophysics Data System (ADS)
Yang, Jing; Reichert, Peter; Abbaspour, Karim C.; Yang, Hong
2007-07-01
Summary: Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box-Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)-(iv) mentioned above.
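Items (ii) and (iii) of the error model can be sketched compactly: a Box-Cox transformation with observation-specific error scales, followed by decorrelation with a continuous-time AR(1) model so that gaps and missing data are handled naturally. All numeric defaults below are assumed placeholders, not the calibrated values:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform used to stabilize the variance of the residuals."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def innovations(y_obs, y_sim, t, lam=0.3, phi_tau=20.0, sigma_season=None):
    """Decorrelate residuals with a continuous-time AR(1) error model.

    y_obs, y_sim : observed and simulated series (same units, > 0)
    t            : observation times in days (gaps/missing data allowed)
    lam          : Box-Cox exponent (assumed value)
    phi_tau      : characteristic correlation time in days (assumed value)
    sigma_season : per-observation error std, e.g. seasonally dependent
    """
    r = boxcox(y_obs, lam) - boxcox(y_sim, lam)
    if sigma_season is not None:
        r = r / sigma_season                  # rescale for heteroscedasticity
    dt = np.diff(t)
    rho = np.exp(-dt / phi_tau)               # correlation across each gap
    # standardized innovations should be i.i.d. N(0, 1) if the model is right
    return (r[1:] - rho * r[:-1]) / np.sqrt(1.0 - rho**2)
```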
Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2017-03-01
Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.
Morrison, Maeve; Cope, Vicki; Murray, Melanie
2018-05-15
Medication errors remain a commonly reported clinical incident in health care, as highlighted by the World Health Organization's focus on reducing medication-related harm. This retrospective quantitative analysis examined medication errors reported by staff using an electronic Clinical Incident Management System (CIMS) during a 3-year period from April 2014 to April 2017 at a metropolitan mental health ward in Western Australia. The aim of the project was to identify the types of medication errors and the contexts in which they occur, and to consider responses through which medication errors can be reduced. Data on medication incidents were retrieved from the categorized tiers of the Clinical Incident Management System database. Areas requiring improvement were identified, and the quality of the documented data captured in the database was reviewed for themes pertaining to medication errors. Content analysis provided insight into the following issues: (i) frequency of the problem, (ii) when the problem was detected, and (iii) characteristics of the error (classification of drug/s, where the error occurred, what time the error occurred, what day of the week it occurred, and patient outcome). Data were compared to the state-wide results published in the Your Safety in Our Hands (2016) report. Results indicated several areas upon which quality improvement activities could be focused: structural changes; changes to policy and practice; changes to individual responsibilities; improving workplace culture to counteract underreporting of medication errors; and improving the safety and quality of medication administration within a mental health setting. © 2018 Australian College of Mental Health Nurses Inc.
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
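The interference pre-cancellation at the heart of THP is easy to sketch for the idealized case of perfectly known CSIT (the opposite of the paper's robust setting): pre-subtract the already-encoded users' interference and fold the result back with a modulo operation. The real-valued channel, 4-PAM alphabet, modulo base, and zero-forcing structure below are illustrative assumptions:

```python
import numpy as np

tau = 8.0   # modulo base for the assumed 4-PAM alphabet {-3, -1, 1, 3}

def mod_tau(x):
    # symmetric modulo folding values into [-tau/2, tau/2]
    return x - tau * np.round(x / tau)

rng = np.random.default_rng(0)
K = 4                                  # single-antenna users
H = rng.normal(size=(K, K))            # channel, assumed perfectly known here

# LQ factorization: H = L @ Qc with L lower-triangular, Qc unitary
Q, R = np.linalg.qr(H.T)
L, Qc = R.T, Q.T

s = rng.choice([-3.0, -1.0, 1.0, 3.0], size=K)   # user symbols

# THP feedback loop: successively pre-subtract known interference, modulo tau
x = np.zeros(K)
for k in range(K):
    x[k] = mod_tau(s[k] - L[k, :k] @ x[:k] / L[k, k])

y = H @ (Qc.T @ x)                     # feedforward filter, then channel (no noise)
s_hat = mod_tau(y / np.diag(L))        # per-user gain + modulo at the receivers
print(np.allclose(s_hat, s))           # True: interference fully pre-cancelled
```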
Peng, Zhouhua; Wang, Dan; Wang, Wei; Liu, Lu
2015-11-01
This paper investigates the containment control problem of networked autonomous underwater vehicles in the presence of model uncertainty and unknown ocean disturbances. A predictor-based neural dynamic surface control design method is presented to develop the distributed adaptive containment controllers, under which the trajectories of follower vehicles nearly converge to the dynamic convex hull spanned by multiple reference trajectories over a directed network. Prediction errors, rather than tracking errors, are used to update the neural adaptation laws, which are independent of the tracking error dynamics, resulting in two time-scales to govern the entire system. The stability property of the closed-loop network is established via Lyapunov analysis, and transient property is quantified in terms of L2 norms of the derivatives of neural weights, which are shown to be smaller than the classical neural dynamic surface control approach. Comparative studies are given to show the substantial improvements of the proposed new method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Xue, Jiao-Mei; Lin, Ping-Zhen; Sun, Ji-Wei; Cao, Feng-Lin
2017-12-01
Here, we explored the functional and neural mechanisms underlying aggression related to adverse childhood experiences. We assessed behavioral performance and event-related potentials during a go/no-go and N-back paradigm. The participants were 15 individuals with adverse childhood experiences and high aggression (ACE + HA), 13 individuals with high aggression (HA), and 14 individuals with low aggression and no adverse childhood experiences (control group). The P2 latency (initial perceptual processing) was longer in the ACE + HA group for the go trials. The HA group had a larger N2 (response inhibition) than controls for the no-go trials. Error-related negativity (error processing) in the ACE + HA and HA groups was smaller than that of controls for false alarm go trials. Lastly, the ACE + HA group had shorter error-related negativity latencies than controls for false alarm trials. Overall, our results reveal the neural correlates of executive function in aggressive individuals with ACEs.
Corsica: A Multi-Mission Absolute Calibration Site
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.
2013-09-01
In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), the standard error is about double (~7 mm) owing to the smaller time series. In this paper, we present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.
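The 4 mm and ~7 mm figures follow the usual sd/√n scaling of a mean; the pass counts below are assumptions chosen only to reproduce the quoted numbers:

```python
# Standard error of a mean bias estimate from repeated calibration passes.
sd = 30.0                      # single-pass scatter, mm (from the abstract)
for n in (56, 18):             # assumed pass counts for the 10- and 35-day series
    print(sd / n ** 0.5)       # -> ~4 mm and ~7 mm standard errors
```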
Analysis and compensation of synchronous measurement error for multi-channel laser interferometer
NASA Astrophysics Data System (ADS)
Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong
2017-05-01
The dual-frequency laser interferometer has been widely used as a displacement sensor in precision motion systems to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn a measurement error: the synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model relating SME to motion velocity. Further, a real-time compensation method for SME is proposed. This method has been verified in a self-developed laser interferometer signal processing board (SPB). The experimental results showed that, using this compensation method, at a motion velocity of 0.89 m/s the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is more easily implemented and applied in engineering than the approach of directly testing for ever smaller signal delays.
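To first order the model reduces to error = velocity × inter-channel delay skew, so the reported residual can be inverted to the implied timing mismatch:

```python
# First-order SME model: position error = velocity * inter-channel delay skew.
v = 0.89                 # stage velocity, m/s (from the abstract)
sme = 1.1e-9             # residual error after compensation, m
print(sme / v)           # implied delay mismatch ~1.2e-9 s, about a nanosecond
```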
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter.
The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments, each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.
NASA Technical Reports Server (NTRS)
Deloach, R.
1981-01-01
The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
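The fractional-impact bookkeeping is a population-weighted sum over noise-contour bands. In the sketch below, the weighting function and all band data are hypothetical placeholders, not the NRC's published curve:

```python
# Fractional-impact bookkeeping: each person counts as w(L) "impacted people".
# The weighting function here is a hypothetical linear ramp above 55 dB DNL,
# used only to illustrate the bookkeeping, not the NRC's actual curve.
def w(level_dnl):
    return max(0.0, 0.05 * (level_dnl - 55.0))

# (population, DNL level) pairs for hypothetical noise-contour bands
bands = [(120_000, 57.5), (40_000, 62.5), (9_000, 67.5), (1_500, 72.5)]
impact = sum(pop * w(L) for pop, L in bands)
print(impact)   # equivalent number of fully impacted people
```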
Target motion tracking in MRI-guided transrectal robotic prostate biopsy.
Tadayyon, Hadi; Lasso, Andras; Kaushal, Aradhana; Guion, Peter; Fichtinger, Gabor
2011-11-01
MRI-guided prostate needle biopsy requires compensation for organ motion between target planning and needle placement. Two questions are studied and answered in this paper: 1) is rigid registration sufficient in tracking the targets with an error smaller than the clinically significant size of prostate cancer, and 2) what is the effect of the number of intraoperative slices on registration accuracy and speed? We propose multislice-to-volume registration algorithms for tracking the biopsy targets within the prostate. Three orthogonal plus additional transverse intraoperative slices are acquired in the approximate center of the prostate and registered with a high-resolution target planning volume. Both rigid and deformable scenarios were implemented. Both simulated and clinical MRI-guided robotic prostate biopsy data were used to assess tracking accuracy. Average registration errors in clinical patient data were 2.6 mm for the rigid algorithm and 2.1 mm for the deformable algorithm. Rigid tracking appears to be promising. Three tracking slices yield significantly high registration speed with an affordable error.
Productivity: Effects of Information Feedback on Human Errors.
1980-07-31
even with 8, 9, and 10 digits. Individual differences were considerable, with a biochemistry technician and a supermarket checkout clerk doing ... two participants took part in the study, and a third participant was planned. For each participant, there were ...
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
Brain potentials measured during a Go/NoGo task predict completion of substance abuse treatment.
Steele, Vaughn R; Fink, Brandi C; Maurer, J Michael; Arbabshirani, Mohammad R; Wilber, Charles H; Jaffe, Adam J; Sidz, Anna; Pearlson, Godfrey D; Calhoun, Vince D; Clark, Vincent P; Kiehl, Kent A
2014-07-01
U.S. nationwide estimates indicate that 50% to 80% of prisoners have a history of substance abuse or dependence. Tailoring substance abuse treatment to the specific needs of incarcerated individuals could improve the effectiveness of treating substance dependence and preventing drug abuse relapse. We tested whether pretreatment neural measures from a response inhibition (Go/NoGo) task would predict which individuals would or would not complete a 12-week cognitive behavioral substance abuse treatment program. Adult incarcerated participants (n = 89; women n = 55) who volunteered for substance abuse treatment performed a Go/NoGo task while event-related potentials (ERPs) were recorded. Stimulus- and response-locked ERPs were compared between participants who completed (n = 68; women = 45) and discontinued (n = 21; women = 10) treatment. As predicted, stimulus-locked P2, response-locked error-related negativity (ERN/Ne), and response-locked error positivity (Pe), measured with windowed time-domain and principal component analysis, differed between groups. Using logistic regression and support-vector machine (i.e., pattern classifier) models, P2 and Pe predicted treatment completion above and beyond other measures (i.e., N2, P300, ERN/Ne, age, sex, IQ, impulsivity, depression, anxiety, motivation for change, and years of drug abuse). Participants who discontinued treatment exhibited deficiencies in sensory gating, as indexed by smaller P2; error monitoring, as indexed by smaller ERN/Ne; and post-error adjustment of response strategy, as indexed by larger Pe. The combination of P2 and Pe reliably predicted 83.33% of individuals who discontinued treatment. These results may help in the development of individualized therapies, which could lead to more favorable, long-term outcomes. © 2013 Society of Biological Psychiatry Published by Society of Biological Psychiatry All rights reserved.
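The prediction step amounts to training a classifier on the pretreatment ERP amplitudes. A minimal sketch with synthetic stand-ins for the two highlighted predictors; all values and the linear-kernel choice are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the two ERP predictors highlighted by the paper:
# stimulus-locked P2 amplitude and response-locked error positivity (Pe).
rng = np.random.default_rng(3)
n = 89
X = np.column_stack([rng.normal(5, 2, n),    # P2 amplitude, uV
                     rng.normal(8, 3, n)])   # Pe amplitude, uV
y = rng.integers(0, 2, n)                    # 1 = completed treatment

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance here; real ERPs carry signal
```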
Hennig, Cheryl; Cooper, David
2011-08-01
Histomorphometric aging methods report varying degrees of precision, measured through the Standard Error of the Estimate (SEE). These techniques have been developed from variable sample sizes (n), and the impact of n on reported aging precision has not been rigorously examined in the anthropological literature. This brief communication explores the relation between n and SEE through a review of the literature (abstracts, articles, book chapters, theses, and dissertations), predictions based upon sampling theory, and a simulation. Published SEE values for age prediction, derived from 40 studies, range from 1.51 to 16.48 years (mean 8.63; sd 3.81 years). In general, these values are widely distributed for smaller samples and the distribution narrows as n increases, a pattern expected from sampling theory. For the two studies that have samples in excess of 200 individuals, the SEE values are very similar (10.08 and 11.10 years) with a mean of 10.59 years. Assuming this mean value is a 'true' characterization of the error at the population level, the 95% confidence intervals for SEE values from samples of 10, 50, and 150 individuals are on the order of ±4.2, 1.7, and 1.0 years, respectively. While numerous sources of variation potentially affect the precision of different methods, the impact of sample size cannot be overlooked. The uncertainty associated with SEE values derived from smaller samples complicates the comparison of approaches based upon different methodology and/or skeletal elements. Meaningful comparisons require larger samples than have frequently been used and should ideally be based upon standardized samples. Copyright © 2011 Wiley-Liss, Inc.
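The narrowing of SEE estimates with n is easy to reproduce by simulation: treat 10.59 years as the population value and watch the spread of sample estimates shrink:

```python
import numpy as np

# Sampling spread of an SEE-like statistic: draw residuals with a "true"
# population SEE of 10.59 years and tabulate the sample estimate by n.
rng = np.random.default_rng(4)
true_see = 10.59
for n in (10, 50, 150):
    sees = np.array([rng.normal(0, true_see, n).std(ddof=2)   # ddof=2 mimics n-2 of regression
                     for _ in range(20_000)])
    lo, hi = np.percentile(sees, [2.5, 97.5])
    print(f"n = {n:3d}: 95% of sample SEEs fall in [{lo:.1f}, {hi:.1f}] years")
```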
The observed clustering of damaging extra-tropical cyclones in Europe
NASA Astrophysics Data System (ADS)
Cusack, S.
2015-12-01
The clustering of severe European windstorms on annual timescales has substantial impacts on the re/insurance industry. Management of the risk is impaired by large uncertainties in estimates of clustering from historical storm datasets typically covering the past few decades. The uncertainties are unusually large because clustering depends on the variance of storm counts. Eight storm datasets are gathered for analysis in this study in order to reduce these uncertainties. Six of the datasets contain more than 100 years of severe storm information to reduce sampling errors, and the diversity of information sources and analysis methods between datasets sample observational errors. All storm severity measures used in this study reflect damage, to suit re/insurance applications. It is found that the shortest storm dataset of 42 years in length provides estimates of clustering with very large sampling and observational errors. The dataset does provide some useful information: indications of stronger clustering for more severe storms, particularly for southern countries off the main storm track. However, substantially different results are produced by removal of one stormy season, 1989/1990, which illustrates the large uncertainties from a 42-year dataset. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm datasets show a greater degree of clustering with increasing storm severity and suggest clustering of severe storms is much more material than weaker storms. Further, they contain signs of stronger clustering in areas off the main storm track, and weaker clustering for smaller-sized areas, though these signals are smaller than uncertainties in actual values. Both the improvement of existing storm records and development of new historical storm datasets would help to improve management of this risk.
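A common clustering diagnostic built on exactly this idea is the variance-to-mean (dispersion) ratio of annual storm counts, which equals 1 for a Poisson process and exceeds 1 under clustering. A sketch with hypothetical counts:

```python
import numpy as np

# Clustering diagnostic: variance-to-mean (dispersion) ratio of annual storm
# counts. Poisson (no clustering) gives ~1; clustered seasons push it above 1.
counts = np.array([0, 2, 1, 0, 5, 1, 0, 3, 1, 2, 0, 4])  # hypothetical counts
print(counts.var(ddof=1) / counts.mean())   # > 1 suggests clustering
```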
A novel method for 3D measurement of RFID multi-tag network based on matching vision and wavelet
NASA Astrophysics Data System (ADS)
Zhuang, Xiao; Yu, Xiaolei; Zhao, Zhimin; Wang, Donghua; Zhang, Wenjie; Liu, Zhenlu; Lu, Dongsheng; Dong, Dingbang
2018-07-01
In the field of radio frequency identification (RFID), the three-dimensional (3D) distribution of RFID multi-tag networks has a significant impact on their reading performance. At the same time, in order to realize anti-collision of RFID multi-tag networks in practical engineering applications, the 3D distribution of RFID multi-tag networks must be measured. In this paper, a novel method for the 3D measurement of RFID multi-tag networks is proposed. A dual-CCD system (vertical and horizontal cameras) is used to obtain images of RFID multi-tag networks from different angles. Then, the wavelet threshold denoising method is used to remove noise in the obtained images. The template matching method is used to determine the two-dimensional coordinates and the vertical coordinate of each tag, from which the 3D coordinates of each tag are obtained. Finally, a model of the nonlinear relation between the 3D coordinate distribution of the RFID multi-tag network and the corresponding reading distance is established using a wavelet neural network. The experimental results show that the average prediction relative error is 0.71% and the time cost is 2.17 s; both are smaller than those of the particle swarm optimization neural network and the genetic algorithm-back propagation neural network, with the time cost of the wavelet neural network about 1% of that of the other two methods. With its smaller relative error and lower time cost, the proposed method can improve the real-time performance of RFID multi-tag networks and the overall dynamic performance of multi-tag networks.
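The 2D localization step is standard normalized cross-correlation template matching. A minimal OpenCV-based sketch on a synthetic image; the threshold is an assumption, and duplicate hits around a correlation peak are not merged:

```python
import cv2
import numpy as np

def locate_tags(image, template, threshold=0.8):
    """2D tag localization by normalized cross-correlation template matching.

    Sketch of the localization step only: in the paper, pixel coordinates
    from the vertical and horizontal cameras are combined into 3D positions.
    """
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    result = np.nan_to_num(result)           # guard zero-variance windows
    ys, xs = np.where(result >= threshold)
    h, w = template.shape[:2]
    return [(x + w // 2, y + h // 2) for x, y in zip(xs, ys)]

# Synthetic demo: two bright rectangular "tags" on a dark background.
img = np.zeros((200, 300), np.uint8)
img[40:60, 50:90] = 255
img[120:140, 180:220] = 255
tag = np.zeros((24, 44), np.uint8)
tag[2:-2, 2:-2] = 255                        # white core with a dark border
print(locate_tags(img, tag))
```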
Gravity and isostatic anomaly maps of Greece produced
NASA Astrophysics Data System (ADS)
Lagios, E.; Chailas, S.; Hipkin, R. G.
A gravity anomaly map of Greece was first compiled in the early 1970s [Makris and Stavrou, 1984] from all available gravity data collected by different Hellenic institutions. However, to compose this map the data had to be smoothed to the point that many of the smaller-wavelength gravity anomalies were lost. New work begun in 1987 has resulted in the publication of an updated map [Lagios et al., 1994] and an isostatic anomaly map derived from it. The gravity data cover the area between east longitudes 19° and 27° and north latitudes 32° and 42°, organized in files of 100-km squares and grouped in 10-km squares using UTM zone 34 coordinates. Most of the data on land come from the gravity observations of Makris and Stavrou [1984], with additional data from the Institute of Geology and Mining Exploration, the Public Oil Corporation of Greece, and Athens University. These data were checked using techniques similar to those used in compiling the gravity anomaly map of the United States, but the horizontal gradient was used as a check rather than the gravity difference. Marine data were digitized from the maps of Morelli et al. [1975a, 1975b]. All gravity anomaly values are referred to the IGSN-71 system, reduced with the standard Bouguer density of 2.67 Mg/m³. We estimate the errors of the anomalies in the continental part of Greece to be ±0.9 mGal; this is expected to be smaller over fairly flat regions. For stations whose height has been determined by leveling, the error is only ±0.3 mGal. For the marine areas, the errors are about ±5 mGal [Morelli, 1990].
Nichols, Jennifer A; Roach, Koren E; Fiorentino, Niccolo M; Anderson, Andrew E
2016-09-01
Evidence suggests that the tibiotalar and subtalar joints provide near six degree-of-freedom (DOF) motion, yet kinematic models frequently assume one DOF at each of these joints. In this study, we quantified the accuracy of kinematic models to predict joint angles at the tibiotalar and subtalar joints from skin-marker data. Models included 1 or 3 DOF at each joint. Ten asymptomatic subjects, screened for deformities, performed 1.0 m/s treadmill walking and a balanced, single-leg heel-rise. Tibiotalar and subtalar joint angles calculated by inverse kinematics for the 1 and 3 DOF models were compared to those measured directly in vivo using dual-fluoroscopy. Results demonstrated that, for each activity, the average errors in tibiotalar joint angles predicted by the 1 DOF model were significantly smaller than those predicted by the 3 DOF model for inversion/eversion and internal/external rotation. In contrast, neither model consistently demonstrated smaller errors when predicting subtalar joint angles. Additionally, neither model could accurately predict discrete angles for the tibiotalar and subtalar joints on a per-subject basis. Differences between model predictions and dual-fluoroscopy measurements were highly variable across subjects, with joint angle errors in at least one rotation direction surpassing 10° for 9 out of 10 subjects. Our results suggest that both the 1 and 3 DOF models can predict trends in tibiotalar joint angles on a limited basis. However, as currently implemented, neither model can predict discrete tibiotalar or subtalar joint angles for individual subjects. Inclusion of subject-specific attributes may improve the accuracy of these models. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Qixing, Chen; Qiyu, Luo
2013-03-01
At present, the architecture of a digital-to-analog converter (DAC) is in essence based on weight currents, and the average value of its D/A signal current grows geometrically as the number of digital signal bits increases, reaching about 2^(n-1) times its least weight current. For a dual weight resistance chain type DAC, by contrast, which performs D/A conversion in the weight-voltage manner, the D/A signal current is fixed at the chain current I_cha; this is only on the order of 1/2^(n-1) of the average signal current of the weight-current DAC. Its principle is as follows: n pairs of dual weight resistances form a resistance chain, which keeps the chain current constant; the digital signals control the total weight resistance from the output point to the zero-potential point, and thereby directly control the total weight voltage at the output point, so that the digital signals are turned directly into a sum of weight-voltage signals. The following goals are thus realized: (1) the total current is less than 200 μA; (2) the total power consumption is less than 2 mW; (3) an 18-bit conversion can be realized by adopting a multi-grade structure; (4) the chip area is one order of magnitude smaller than that of the subsection current-steering DAC; (5) the error depends only on the error of the unit resistance, so it is smaller than the error of the subsection current-steering DAC; (6) the conversion time is only one switch-on or switch-off action time, so its speed is not lower than that of present DACs.
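A worked reading of the claimed current ratio (an interpretation of the abstract's figures, not a formula it states): averaged over codes, an n-bit weight-current DAC draws roughly half its full-scale current, while the chain DAC draws the fixed I_cha,

\[ \bar I_{\mathrm{weight}} \approx \tfrac{1}{2}\,(2^{n}-1)\,I_{\mathrm{LSB}} \approx 2^{\,n-1} I_{\mathrm{LSB}}, \qquad \frac{I_{\mathrm{cha}}}{\bar I_{\mathrm{weight}}} \sim 2^{-(n-1)}, \]

so for the 18-bit case the ratio is about 1/2^17 ≈ 7.6 × 10^-6, i.e. roughly five orders of magnitude.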
NASA Astrophysics Data System (ADS)
Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Zheng, Yong-hui; Shan, Cong-miao
2013-08-01
Based on the cat-eye effect of an optical system, free-space optical communication based on a cat-eye modulating retro-reflector can build a communication link rapidly. Compared with a classical free-space optical communication system, a system based on a cat-eye modulating retro-reflector has clear advantages: the communication link can be built more rapidly, and the passive terminal is smaller, lighter, and consumes less power. The incidence angle is an important factor in the cat-eye effect, so it affects the retro-reflecting communication link. In this paper, the principle and work flow of free-space optical communication based on a cat-eye modulating retro-reflector are introduced. Then, using geometric optics, an equivalent model of the modulating retro-reflector with incidence angle is presented, and analytical solutions for the active area and retro-reflected light intensity of the cat-eye modulating retro-reflector are given. The noise of the PIN photodetector is analyzed, and on this basis the bit error rate of the free-space optical communication link is derived. Finally, simulations were performed to study the effect of the incidence angle on the communication. The simulation results show that the incidence angle has little effect on the active area and retro-reflected light intensity when the incident beam is within the active field angle of the cat-eye modulating retro-reflector. For a given system and conditions, the communication link can be built rapidly when the incident light beam is within the field angle, and the bit error rate increases greatly with link range; when the link range is smaller than 35 km, the bit error rate is less than 10⁻¹⁶.
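A hedged sketch of the kind of BER calculation the paper describes, using the generic Gaussian Q-factor model for an on-off-keyed link with a PIN detector; all parameter values and the 1/R⁴ out-and-back power scaling are illustrative assumptions, not the paper's link budget.

```python
# Toy BER-vs-range trend for a retro-reflected OOK link with a PIN detector.
import numpy as np
from scipy.special import erfc

def ber_ook(p_rx_w, responsivity=0.8, i_noise_rms=5e-9):
    i_sig = responsivity * p_rx_w          # signal photocurrent (A)
    q = i_sig / (2.0 * i_noise_rms)        # Q-factor, equal noise on 0 and 1
    return 0.5 * erfc(q / np.sqrt(2.0))

# Received power falling roughly as 1/R**4 for an out-and-back link,
# with an arbitrary scale factor purely for illustration:
for r_km in (5, 15, 35):
    p_rx = 1e-3 * (1.0 / r_km) ** 4
    print(r_km, "km ->", ber_ook(p_rx))
```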
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; and (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified; those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit; of these, 1.2/1000 (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported; 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected, and reported incidents do not reflect the profile of medication errors which occur in hospitals or their underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches, including data mining of electronic clinical information systems, are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
"Bed Side" Human Milk Analysis in the Neonatal Intensive Care Unit: A Systematic Review.
Fusch, Gerhard; Kwan, Celia; Kotrri, Gynter; Fusch, Christoph
2017-03-01
Human milk analyzers can measure macronutrient content in native breast milk to tailor adequate supplementation with fortifiers. This article reviews all studies using milk analyzers, including (i) evaluation of the devices, (ii) the impact of different conditions on the macronutrient analysis of human milk, and (iii) clinical trials to improve growth. Results lack consistency, potentially due to systematic errors in the validation of the devices or to pre-analytical sample preparation errors such as inadequate homogenization. It is crucial to introduce good laboratory and clinical practice when using these devices; otherwise, non-validated clinical use can adversely affect the growth outcomes of infants. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
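A minimal numerical sketch of the off-line/on-line split for an affinely parametrized operator A(mu) = sum_q theta_q(mu) A_q; the operators and reduced basis below are random stand-ins, and the a posteriori error bound machinery is omitted.

```python
# Off-line: project each affine block once. On-line: cost depends only
# on N and Q, never on the full dimension n.
import numpy as np

rng = np.random.default_rng(0)
n, N, Q = 200, 8, 3                        # full size, basis size, affine terms
A_q = [rng.standard_normal((n, n)) for _ in range(Q)]
f = rng.standard_normal(n)
W = np.linalg.qr(rng.standard_normal((n, N)))[0]   # reduced basis W_N

A_q_N = [W.T @ Aq @ W for Aq in A_q]       # off-line projections
f_N = W.T @ f

def solve_online(theta):
    A_N = sum(t * Aq for t, Aq in zip(theta, A_q_N))
    return np.linalg.solve(A_N, f_N)

u_N = solve_online([1.0, 0.5, 0.2])
output = f_N @ u_N                         # a linear-functional output
```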
1990-06-01
Resonant Buck converter. Space power supply manufacturers have tried to increase power density and construct smaller, highly efficient power supplies by increasing switching frequency. Incorporation of a power MOSFET as a ...
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites) was recently computed as a direct application of the method and is also summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared with nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
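A generic rendering of the weight-adjustment rule (our sketch, not the GEM software): rescale a data set's weight until the subset-versus-complete parameter differences are statistically consistent with their error estimates.

```python
# Calibration loop core: k ~ 1 means differences match their errors.
import numpy as np

def calibration_factor(dx, sigma_dx):
    """Chi-like statistic of subset-minus-complete parameter differences."""
    dx, sigma_dx = np.asarray(dx, float), np.asarray(sigma_dx, float)
    return np.sqrt(np.mean((dx / sigma_dx) ** 2))

def update_weight(weight, dx, sigma_dx):
    k = calibration_factor(dx, sigma_dx)
    return weight / k**2      # re-solve and repeat until k is near 1

# Example: differences twice their error estimates quarter the weight.
w = update_weight(1.0, dx=[0.4, -0.6], sigma_dx=[0.2, 0.3])   # -> 0.25
```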
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
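A toy sketch of the teacher-forcing idea, assuming a one-layer feedforward stand-in for brevity; the patented scheme applies the feedback within recurrent network dynamics, so this only illustrates the decaying corrective feedback at the output units.

```python
# Decaying output-error feedback: the error nudges the output state, and
# the gradient-descent update is evaluated at that corrected operating point.
import numpy as np

rng = np.random.default_rng(1)
T, n_in, n_out = 50, 3, 2
x = rng.standard_normal((T, n_in))                       # input vector over time
target = np.sin(np.linspace(0, 3, T))[:, None] * np.ones((1, n_out))

W = 0.1 * rng.standard_normal((n_in, n_out))
lam, lr = 1.0, 0.05                                      # forcing strength, step
for cycle in range(200):
    h = x @ W
    y = np.tanh(h)                                       # actual output vector
    err = target - y                                     # error vector
    y_f = np.tanh(h + lam * err)                         # outputs nudged by feedback
    W += lr * (x.T @ (err * (1.0 - y_f**2)))             # gradient-descent update
    lam *= 0.98                                          # relax forcing over cycles
```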
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1995-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1983-01-01
Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
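For context, the classical Ruze relation that the model generalizes (quoted from the antenna literature, not from this record) links gain loss to the surface rms error ε at wavelength λ:

\[ \frac{G}{G_0} = \exp\!\left[-\left(\frac{4\pi\varepsilon}{\lambda}\right)^{2}\right]. \]

The model described above extends this uniform-ε picture to annulus-by-annulus rms errors and nonuniform illumination tapers.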
NASA Astrophysics Data System (ADS)
Harmanec, Petr; Prša, Andrej
2011-08-01
The increasing precision of astronomical observations of stars and stellar systems is gradually getting to a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius R⊙ and luminosity L⊙ by nominal values that are by definition exact and expressed in SI units; (2) computing stellar masses in terms of M⊙ by noting that the measurement error of the product GM⊙ is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units using the correspondingly derived nominal values; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need for and demonstrate the advantages of such a paradigm shift.
Learning to Predict and Control the Physics of Our Movements
2017-01-01
When we hold an object in our hand, the mass of the object alters the physics of our arm, changing the relationship between motor commands that our brain sends to our arm muscles and the resulting motion of our hand. If the object is unfamiliar to us, our first movement will exhibit an error, producing a trajectory that is different from the one we had intended. This experience of error initiates learning in our brain, making it so that on the very next attempt our motor commands partially compensate for the unfamiliar physics, resulting in smaller errors. With further practice, the compensation becomes more complete, and our brain forms a model that predicts the physics of the object. This model is a motor memory that frees us from having to relearn the physics the next time that we encounter the object. The mechanism by which the brain transforms sensory prediction errors into corrective motor commands is the basis for how we learn the physics of objects with which we interact. The cerebellum and the motor cortex appear to be critical for our ability to learn physics, allowing us to use tools that extend our capabilities, making us masters of our environment. PMID:28202784
NASA Astrophysics Data System (ADS)
ur Rahman, Zia; Deen, K. M.; Cano, Lawrence; Haider, Waseem
2017-07-01
Corrosion resistance and biocompatibility of 316L stainless steel implants depend on the surface features and the nature of the passive film. The influence of electropolishing on the surface topography, surface free energy, and surface chemistry was determined by atomic force microscopy, contact angle measurement, and X-ray photoelectron spectroscopy, respectively. The electropolishing of 316L stainless steel was conducted at the oxygen evolution potential (EPO) and below the oxygen evolution potential (EPBO). Compared to the mechanically polished (MP) and EPO samples, the EPBO sample exhibited lower surface roughness (Ra = 6.07 nm) and smaller surface free energy (44.21 mJ/m²). The relatively lower corrosion rate (0.484 mpy) and smaller passive current density (0.619 μA/cm²), as determined from cyclic polarization scans, were found to be related to the presence of OH, Cr(III), Fe(0), Fe(II), and Fe(III) species at the surface. These species confirmed the existence of a relatively uniform passive oxide film on the EPBO surface. Moreover, the relatively large charge transfer resistance (Rct) and passive film resistance (Rf) registered for the EPBO sample in impedance spectroscopy analysis confirmed its better electrochemical performance. The in vitro response of these polished samples toward MC3T3 pre-osteoblast cell proliferation was found to be directly related to their surface and electrochemical properties.
The accuracy of estimates of the overturning circulation from basin-wide mooring arrays
NASA Astrophysics Data System (ADS)
Sinha, B.; Smeed, D. A.; McCarthy, G.; Moat, B. I.; Josey, S. A.; Hirschi, J. J.-M.; Frajka-Williams, E.; Blaker, A. T.; Rayner, D.; Madec, G.
2018-01-01
Previous modeling and observational studies have established that it is possible to accurately monitor the Atlantic Meridional Overturning Circulation (AMOC) at 26.5°N using a coast-to-coast array of instrumented moorings supplemented by direct transport measurements in key boundary regions (the RAPID/MOCHA/WBTS Array). The main sources of observational and structural errors have been identified in a variety of individual studies. Here a unified framework for identifying and quantifying structural errors associated with the RAPID array-based AMOC estimates is established using a high-resolution (eddy resolving at low to mid latitudes, eddy permitting elsewhere) ocean general circulation model, which simulates the ocean state between 1978 and 2010. We define a virtual RAPID array in the model in close analogy to the real RAPID array and compare the AMOC estimate from the virtual array with the true model AMOC. The model analysis suggests that the RAPID method underestimates the mean AMOC by ∼1.5 Sv (1 Sv = 10⁶ m³ s⁻¹) at ∼900 m depth; however, it captures the variability to high accuracy. We examine three major contributions to the streamfunction bias: (i) due to the assumption of a single fixed reference level for the calculation of geostrophic transports, (ii) due to regions not sampled by the array, and (iii) due to ageostrophic transport. A key element in (i) and (iii) is the use of the model sea surface height to establish the true (or absolute) geostrophic transport. In the upper 2000 m, we find that the reference level bias is strongest and most variable in time, whereas the bias due to unsampled regions is largest below 3000 m. The ageostrophic transport is significant in the upper 1000 m but shows very little variability. The results establish, for the first time, the uncertainty of the AMOC estimate due to the combined structural errors in the measurement design and suggest ways in which the error could be reduced. Our work has applications to basin-wide circulation measurement arrays at other latitudes and in other basins, as well as to quantifying systematic errors in ocean model estimates of the AMOC at 26.5°N.
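A schematic sketch of the reference-level step implicated in bias (i), assuming synthetic boundary density profiles and loose sign conventions; it is not the RAPID processing code.

```python
# Integrate thermal-wind shear between two end-point density profiles,
# assuming zero velocity at a fixed (bottom) reference level.
import numpy as np

g, rho0 = 9.81, 1025.0
f = 2 * 7.2921e-5 * np.sin(np.radians(26.5))   # Coriolis parameter at 26.5N
z = np.linspace(0.0, -5000.0, 501)             # depth grid (m)
dz = abs(z[1] - z[0])
Lx = 5.5e6                                     # basin width (m), illustrative
rho_w = 1027.0 + 0.000400 * (-z)               # western boundary profile
rho_e = 1027.0 + 0.000405 * (-z)               # eastern boundary profile

shear = -g / (rho0 * f) * (rho_e - rho_w) / Lx # thermal wind: dv/dz
v = np.cumsum(shear[::-1])[::-1] * dz          # v = 0 at the reference level
psi = np.cumsum(v * Lx) * dz / 1e6             # overturning streamfunction (Sv)
print(f"max |overturning| ~ {abs(psi).max():.1f} Sv")
```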
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
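A small numerical sketch of where the prediction error variance-covariance matrix comes from, for a toy mixed model y = Xb + Zu + e; the design, relationship matrix, and variance ratio below are illustrative assumptions, not a real evaluation.

```python
# PEV of the random effects is the u-block of the inverse mixed-model
# equations, scaled by the residual variance.
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 30, 2, 5                           # records, fixed, random levels
X = rng.integers(0, 2, (n, p)).astype(float)
Z = np.eye(q)[rng.integers(0, q, n)]         # incidence of random effects
A = np.eye(q)                                # relationship matrix (identity here)
lam = 2.0                                    # sigma_e^2 / sigma_u^2

C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.linalg.inv(A)]])
Cinv = np.linalg.inv(C)
sigma_e2 = 1.0
PEV = Cinv[p:, p:] * sigma_e2                # prediction error (co)variances
```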
Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties
NASA Astrophysics Data System (ADS)
Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.
2016-06-01
The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc⁻¹] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ∼10 per cent at k > 0.3 h Mpc⁻¹. Over the entire range, these reduce to about ∼5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics, we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales, k ≲ 0.2 h Mpc⁻¹, results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
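A minimal sketch of the sample covariance and de-biased precision matrix referred to above, assuming Gaussian-distributed stand-in spectra and the standard Anderson/Hartlap factor (which, per the paper, is insufficient in the non-Gaussian regime):

```python
# Sample covariance over n_s realizations and the de-biased inverse.
import numpy as np

rng = np.random.default_rng(3)
n_s, n_b = 500, 40                         # realizations, k-bins
P = rng.standard_normal((n_s, n_b))        # stand-in power spectra
C_hat = np.cov(P, rowvar=False)
hartlap = (n_s - n_b - 2) / (n_s - 1)      # valid for Gaussian samples
Psi_hat = hartlap * np.linalg.inv(C_hat)   # de-biased precision matrix
```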
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management, and inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS fell within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS, and insulin dosing errors were lowest for the Contour Next USB. All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion; however, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range compared with the 2 other BGMS. CEG and SEG analyses, as well as calculation of the hypothetical bolus insulin dosing error, suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
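For clarity, the MARD statistic used to rank the meters is simply the mean absolute relative difference from the reference; a minimal sketch with arbitrary example numbers, not study data:

```python
# MARD in per cent: average of |meter - reference| / reference.
import numpy as np

def mard(meter, reference):
    meter = np.asarray(meter, float)
    reference = np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(meter - reference) / reference)

print(mard([95, 160, 240], [100, 150, 250]))   # example values only
```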
Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou
2017-01-01
The detection of error information is an essential prerequisite for feedback-based movement. This study investigated the differential behavior and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally measured signals. Force characteristics, the force-discharge relation, and phase-locked cortical activities in the contralateral motor cortex relative to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which the sizes of online execution errors were displayed with various amplification ratios. Along with a spectral shift of the force output toward a lower band, the force output became less irregular and more phase-leading, and tracking accuracy was worse in the LSF condition than in the HSF condition. The coherent discharge of high-phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition was manifested with stronger phase-locked EEG activity in the contralateral motor cortex relative to the discharge of the HP MUs (LSF > NSF, HSF). The coherent discharge of the HP MUs during cyclic force-tracking dominated the force-discharge relation, which increased inversely with the error scaling factor. In conclusion, the size of the visualized error gates motor unit discharge, the force-discharge relation, and the relative influences of the feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of HP MUs. PMID:28348530
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation: they utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement, as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach to the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps, and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
Error assessment in molecular dynamics trajectories using computed NMR chemical shifts.
Koes, David R; Vries, John K
2017-01-01
Accurate chemical shifts for the atoms in molecular dynamics (MD) trajectories can be obtained from quantum mechanical (QM) calculations that depend solely on the coordinates of the atoms in the localized regions surrounding atoms of interest. If these coordinates are correct and the sample size is adequate, the ensemble average of these chemical shifts should be equal to the chemical shifts obtained from NMR spectroscopy; if this is not the case, the coordinates must be incorrect. We have utilized this fact to quantify the errors associated with the backbone atoms in MD simulations of proteins. A library of regional conformers containing 169,499 members was constructed from 6 model proteins. The chemical shifts associated with the backbone atoms in each of these conformers were obtained from QM calculations using density functional theory at the B3LYP level with a 6-311+G(2d,p) basis set. Chemical shifts were assigned to each backbone atom in each MD simulation frame using a template matching approach. The ensemble average of these chemical shifts was compared to chemical shifts from NMR spectroscopy. A large systematic error was identified that affected the ¹H atoms of the peptide bonds involved in hydrogen bonding with water molecules or peptide backbone atoms. This error was highly sensitive to changes in electrostatic parameters. Smaller errors affecting the ¹³Cα and ¹⁵N atoms were also detected. We believe these errors could be useful as metrics for comparing the force fields and parameter sets used in MD simulation because they are directly tied to errors in atomic coordinates.
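A minimal sketch of the consistency check exploited here: ensemble-average the per-frame predicted shifts and compare with experiment; the per-frame prediction below is a placeholder for the paper's QM/template-matching step.

```python
# Nonzero residuals flag systematic coordinate errors for those atoms.
import numpy as np

def ensemble_shift_error(pred_by_frame, experimental):
    """pred_by_frame: (n_frames, n_atoms); experimental: (n_atoms,)."""
    avg = np.asarray(pred_by_frame, float).mean(axis=0)
    return avg - np.asarray(experimental, float)

# Example with fabricated numbers purely to show the shapes:
err = ensemble_shift_error([[8.1, 120.2], [8.3, 119.8]], [8.0, 121.0])
```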
Impact of switching from Caucasian to Indian reference equations for spirometry interpretation.
Chhabra, S K; Madan, M
2018-03-01
In the absence of ethnically appropriate prediction equations, spirometry data in Indian subjects are often interpreted using equations for other ethnic populations. To evaluate the impact of switching from Caucasian (National Health and Nutrition Examination Survey III [NHANES III] and Global Lung Function Initiative [GLI]) equations to the recently published North Indian equations on spirometric interpretation, and to examine the suitability of GLI-Mixed equations for this population. Spirometry data on 12 323 North Indian patients were analysed using the North Indian equations as well as NHANES III, GLI-Caucasian and GLI-Mixed equations. Abnormalities and ventilatory patterns were categorised and agreement in interpretation was evaluated. The NHANES III and GLI-Caucasian equations and, to a lesser extent, the GLI-Mixed equations, predicted higher values and labelled more measurements as abnormal. In up to one third of the patients, these differed from Indian equations in the categorisation of ventilatory patterns, with more patients classified as having restrictive and mixed disease. The NHANES III and GLI-Caucasian equations substantially overdiagnose abnormalities and misclassify ventilatory patterns on spirometry in Indian patients. Such errors of interpretation, although less common with the GLI-Mixed equations, remain substantial and are clinically unacceptable. A switch to Indian equations will have a major impact on interpretation.
NASA Astrophysics Data System (ADS)
Siman, W.; Mawlawi, O. R.; Mikell, J. K.; Mourtada, F.; Kappadath, S. C.
2017-01-01
The aims of this study were to evaluate the effects of noise, motion blur, and motion compensation using quiescent-period gating (QPG) on the activity concentration (AC) distribution, quantified using the cumulative AC volume histogram (ACVH), in count-limited studies such as ⁹⁰Y-PET/CT. An International Electrotechnical Commission phantom filled with low ¹⁸F activity was used to simulate clinical ⁹⁰Y-PET images. PET data were acquired using a GE-D690 when the phantom was static and subject to 1-4 cm periodic 1D motion. The static data were down-sampled into shorter durations to determine the effect of noise on ACVH. Motion-degraded PET data were sorted into multiple gates to assess the effect of motion and QPG on ACVH. Errors in ACVH at AC90 (the minimum AC that covers 90% of the volume of interest (VOI)), AC80, and ACmean (the average AC in the VOI) were characterized as a function of noise and amplitude before and after QPG. Scan-time reduction increased the apparent non-uniformity of sphere doses and the dispersion of ACVH; these effects were more pronounced in smaller spheres. Noise-related errors in ACVH at AC20 to AC70 were smaller (<15%) than the errors at AC80 to AC90 (>15%). The accuracy of ACmean was largely independent of the total count. Motion decreased the observed AC and skewed the ACVH toward lower values; the severity of this effect depended on motion amplitude and tumor diameter. The errors in AC20 to AC80 for the 17 mm sphere were -25% and -55% for motion amplitudes of 2 cm and 4 cm, respectively. With QPG, the errors in AC20 to AC80 of the 17 mm sphere were reduced to -15% for motion amplitudes <4 cm. For spheres with a motion amplitude to diameter ratio >0.5, QPG was effective at reducing errors in ACVH despite increases in image non-uniformity due to increased noise. ACVH is believed to be more relevant than mean or maximum AC for calculating tumor control and normal tissue complication probability; however, caution needs to be exercised when using ACVH in post-therapy ⁹⁰Y imaging because of its susceptibility to image degradation from both image noise and respiratory motion.
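A minimal sketch of the ACVH metrics as defined above (AC_x being the minimum AC covering x% of the VOI); the voxel values are a toy stand-in for a segmented PET volume.

```python
# AC_x: sort voxels by AC, take the value at the x-th coverage percentile.
import numpy as np

def ac_coverage(voxels, coverage_pct):
    v = np.sort(np.asarray(voxels, float))[::-1]     # highest AC first
    k = int(np.ceil(coverage_pct / 100.0 * v.size))
    return v[k - 1]

vals = np.random.default_rng(5).gamma(2.0, 1.0, 1000)   # stand-in VOI voxels
ac90, ac80, ac_mean = ac_coverage(vals, 90), ac_coverage(vals, 80), vals.mean()
```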
On Combining Thermal-Infrared and Radio-Occultation Data of Saturn's Atmosphere
NASA Technical Reports Server (NTRS)
Flasar, F. M.; Schinder, P. J.; Conrath, B. J.
2008-01-01
Radio-occultation and thermal-infrared measurements are complementary investigations for sounding planetary atmospheres. The vertical resolution afforded by radio occultations is typically approximately 1 km or better, whereas that from infrared sounding is often comparable to a scale height. On the other hand, an instrument like CIRS can easily generate global maps of temperature and composition, whereas occultation soundings are usually distributed more sparsely. The starting point for radio-occultation inversions is determining the residual Doppler-shifted frequency, that is, the shift in frequency from what it would be in the absence of the atmosphere. Hence the positions and relative velocities of the spacecraft, target atmosphere, and DSN receiving station must be known to high accuracy, and it is not surprising that the inversions can be susceptible to sources of systematic error. Stratospheric temperature profiles on Titan retrieved from Cassini radio occultations were found to be very susceptible to errors in the reconstructed spacecraft velocities (approximately 1 mm/s). Here the ability to adjust the spacecraft ephemeris so that the profiles matched those retrieved from CIRS limb sounding proved critical in mitigating this error. A similar procedure can be used for Saturn, although the sensitivity of its retrieved profiles to this type of error seems to be smaller. One issue that has appeared in inverting the Cassini occultations by Saturn is the uncertainty in its equatorial bulge, that is, the shape of its iso-density surfaces at low latitudes. Typically one approximates such a surface as a geopotential surface by assuming a barotropic atmosphere. However, the recent controversy over the equatorial winds, i.e., whether they changed between the Voyager (1981) era and the later (after 1996) epochs of Cassini and some Hubble observations, has made it difficult to know the exact shape of the surface, and this leads to uncertainties of one to a few kelvins in the retrieved temperature profiles. This propagates into errors in the retrieved helium abundance, which makes use of thermal-infrared spectra and synthetic spectra computed with retrieved radio-occultation temperature profiles. The highest abundances are retrieved with the faster Voyager-era winds, but even these abundances are somewhat smaller than those retrieved from the thermal-infrared data alone (albeit with larger formal errors). The helium abundance determination is most sensitive to temperatures in the upper troposphere. Further progress may include matching the radio-occultation profiles with those from CIRS limb sounding in the upper stratosphere.
Improving Estimates Of Phase Parameters When Amplitude Fluctuates
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.
1989-01-01
Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.
Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations
2015-06-01
... using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these ... For generality, and because of interest in multi-speed and high-Reynolds-number, wall-bounded flow regimes, a dual-time framework is adopted in the present work ... errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central ...
A Sequential Perspective on Searching for Static Targets
2011-01-01
... number of expected looks. One possible measure of performance is the amount of slack between the error tolerance and the observed error rate ... less than a, and it dominates the two alternate procedures. However, the error rates of the two alternate procedures are smaller than when there is ...
Experimental Results of Multiple Scattering.
1980-07-01
... the error is seen to be less for targets with a smaller |S(θ)|/|S(0)| ratio, like the softer particles made from expanded polystyrene, and larger for harder ... optical spectrum, we also notice marked differences from the P, Q plots of Dylite (expanded polystyrene) particles in preceding sections. It was ... spheres made of expanded polystyrene. As X is continuously varied for the display of i(θ), we notice a fairly symmetrical intensity profile about X = 8/2 ...
Experimental Results of Multiple Scattering.
1981-11-01
... fixed, the error is seen to be less for targets with a smaller |S(θ)|/|S(0)| ratio, like the softer particles made from expanded polystyrene, and larger for ... differences from the P, Q plots of Dylite (expanded polystyrene) particles in preceding sections. It was rather difficult to prepare more than two identical ... contacting identical spheres made of expanded polystyrene. As X is continuously varied for the display of i(θ), we notice a fairly symmetrical ...
NASA Astrophysics Data System (ADS)
Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian
2017-03-01
Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I + II + III (FI + II + III) supernatant in human albumin (HA) separation, to achieve qualitative and quantitative monitoring at the same time and assure product quality. First, a qualitative model was established using principal component analysis (PCA) with 6 of 8 normal batches of samples, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of total protein (TP) and HA separately, using partial least squares regression (PLS-R), and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (R_p²), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP), and ratio of performance to deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L, and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L, and 5.47 for HA, respectively. The results showed that the established models could give rapid and accurate measurements of the TP and HA content. This study indicates that NIRS is an effective tool that can be successfully used for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of FI + II + III supernatant. This research has significant reference value for assuring quality and improving the recovery ratio of HA at industrial scale using NIRS.
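A hedged sketch of the PLS-R validation statistics reported (RMSEP and RPD), assuming scikit-learn and random stand-in spectra; the component count and data shapes are illustrative, not the study's calibration.

```python
# Fit PLS-R on calibration spectra, then score the validation set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X_cal, y_cal = rng.standard_normal((60, 200)), rng.standard_normal(60)
X_val, y_val = rng.standard_normal((20, 200)), rng.standard_normal(20)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_hat = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_hat) ** 2))
rpd = np.std(y_val, ddof=1) / rmsep        # ratio of performance to deviation
```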
Goldstein, Felicia C; Caveney, Angela F; Hertzberg, Vicki S; Silbergleit, Robert; Yeatts, Sharon D; Palesch, Yuko Y; Levin, Harvey S; Wright, David W
2017-01-01
A Phase III, double-blind, placebo-controlled trial (ProTECT III) found that administration of progesterone did not reduce mortality or improve functional outcome as measured by the Glasgow Outcome Scale Extended (GOSE) in subjects with moderate to severe traumatic brain injury. We conducted a secondary analysis of neuropsychological outcomes to evaluate whether progesterone is associated with improved recovery of cognitive and motor functioning. ProTECT III was conducted at 49 level I trauma centers in the United States. Adults with moderate to severe TBI were randomized to receive intravenous progesterone or placebo within 4 h of injury for a total of 4 days. At 6 months, subjects underwent evaluation of memory, attention, executive functioning, language, and fine motor coordination/dexterity. Chi-square analysis revealed no significant difference in the proportion of subjects (263/280 progesterone, 283/295 placebo) with Galveston Orientation and Amnesia Test scores ≥75. Analyses of covariance did not reveal significant treatment effects for memory (Buschke immediate recall, p = 0.53; delayed recall, p = 0.94), attention (Trails A speed, p = 0.81, and errors, p = 0.22; Digit Span Forward length, p = 0.66), executive functioning (Trails B speed, p = 0.97, and errors, p = 0.93; Digit Span Backward length, p = 0.60), language (timed phonemic fluency, p = 0.05), or fine motor coordination/dexterity (Grooved Pegboard dominant hand time, p = 0.75, and peg drops, p = 0.59; nondominant hand time, p = 0.74, and peg drops, p = 0.61). Pearson product-moment correlations demonstrated significant (p < 0.001) associations between better neuropsychological performance and higher GOSE scores. Consistent with the primary outcome of the ProTECT III trial, the secondary outcomes provide no evidence of a neuroprotective effect of progesterone.
Belguise-Valladier, P; Maki, H; Sekiguchi, M; Fuchs, R P
1994-02-11
In the present work, we have studied the in vitro replication of DNA templates carrying a single N-2-acetylaminofluorene (AAF) or cis-diamminedichloroplatinum(II) (cis-DDP) modification. We used the holoenzyme (pol III HE) or the alpha subunit of DNA polymerase III, which is involved in SOS mutagenesis, and other DNA polymerases, in order to compare enzymes having different biological roles and properties. Single-stranded oligonucleotides (63-mers) bearing a single AAF adduct at one of the guanine residues of the NarI sequence (-G1G2CG3CC-) were used in primer extension assays. Site-specifically platinated 5'd(ApG) or 5'd(GpG) oligonucleotides were constructed and similarly used in primer extension assays. In all cases, irrespective of both the chemical nature of the lesion (i.e., AAF or cis-DDP) and its local sequence context (i.e., the 3 different sites for AAF adducts within the NarI site), replication by pol III HE and the pol I Klenow fragment (pol I Kf) stops one base prior to the adduct site. Removal of the 3'-->5' proofreading activity alone was not sufficient to trigger bypass of the DNA lesions. Indeed, when the proofreading activity of pol I is inactivated by a point mutation (pol I Kf (exo-)), the major replication product corresponds to the position opposite the adduct site, showing that incorporation across from the AAF adduct is possible. These results suggest that a polymerase with proofreading activity stops one nucleotide before the adduct not because it is unable to insert a nucleotide opposite the adduct, but most likely because elongation past the adduct is strongly impaired, thus giving the proofreading exonuclease an increased time frame to remove the base inserted across from the adduct. These results are discussed in terms of their implications for error-free and error-prone bypass in vivo.