Sample records for average unsigned error

  1. Explicitly Representing the Solvation Shell in Continuum Solvent Calculations

    PubMed Central

    Svendsen, Hallvard F.; Merz, Kenneth M.

    2009-01-01

    A method is presented to explicitly represent the first solvation shell in continuum solvation calculations. Initial solvation shell geometries were generated with classical molecular dynamics simulations. Clusters consisting of the solute and 5 solvent molecules were fully relaxed in quantum mechanical calculations. The free energy of solvation of the solute was calculated from the free energy of formation of the cluster and the solvation free energy of the cluster calculated with continuum solvation models. The method has been implemented with two continuum solvation models, a Poisson-Boltzmann model and the IEF-PCM model. Calculations were carried out for a set of 60 ionic species. Implemented with the Poisson-Boltzmann model, the method gave an unsigned average error of 2.1 kcal/mol and an RMSD of 2.6 kcal/mol for anions; for cations, the unsigned average error was 2.8 kcal/mol and the RMSD 3.9 kcal/mol. Similar results were obtained with the IEF-PCM model. PMID:19425558
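    The route described above corresponds to a standard cluster/continuum thermodynamic cycle; a minimal sketch, assuming n explicit solvent molecules S clustered around solute X and omitting standard-state and concentration corrections (the paper's exact bookkeeping may differ):

      \[
        \Delta G_{\mathrm{solv}}(\mathrm{X}) \;\approx\;
        \Delta G_{\mathrm{gas}}\!\left(\mathrm{X} + n\,\mathrm{S} \rightarrow \mathrm{X}{\cdot}\mathrm{S}_n\right)
        + \Delta G_{\mathrm{solv}}^{\mathrm{cont}}\!\left(\mathrm{X}{\cdot}\mathrm{S}_n\right)
        - n\,\Delta G_{\mathrm{solv}}(\mathrm{S})
      \]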

  2. Analytic Energy Gradients for Variational Two-Electron Reduced-Density-Matrix-Driven Complete Active Space Self-Consistent Field Theory.

    PubMed

    Maradzike, Elvis; Gidofalvi, Gergely; Turney, Justin M; Schaefer, Henry F; DePrince, A Eugene

    2017-09-12

    Analytic energy gradients are presented for a variational two-electron reduced-density-matrix (2-RDM)-driven complete active space self-consistent field (CASSCF) method. The active-space 2-RDM is determined using a semidefinite programming (SDP) algorithm built upon an augmented Lagrangian formalism. Expressions for analytic gradients are simplified by the fact that the Lagrangian is stationary with respect to variations in both the primal and the dual solutions to the SDP problem. Orbital response contributions to the gradient are identical to those that arise in conventional CASSCF methods in which the electronic structure of the active space is described by a full configuration interaction (CI) wave function. We explore the relative performance of variational 2-RDM (v2RDM)- and CI-driven CASSCF for the equilibrium geometries of 20 small molecules. When enforcing two-particle N-representability conditions, full-valence v2RDM-CASSCF-optimized bond lengths display a mean unsigned error of 0.0060 Å and a maximum unsigned error of 0.0265 Å, relative to those obtained from full-valence CI-CASSCF. When enforcing partial three-particle N-representability conditions, the mean and maximum unsigned errors are reduced to only 0.0006 and 0.0054 Å, respectively. For these same molecules, full-valence v2RDM-CASSCF bond lengths computed in the cc-pVQZ basis set deviate from experimentally determined ones on average by 0.017 and 0.011 Å when enforcing two- and three-particle conditions, respectively, whereas CI-CASSCF displays an average deviation of 0.010 Å. The v2RDM-CASSCF approach with two-particle conditions is also applied to the equilibrium geometry of pentacene; optimized bond lengths deviate from those derived from experiment, on average, by 0.015 Å when using a cc-pVDZ basis set and a (22e,22o) active space.

  3. Multi-Site λ-dynamics for simulated Structure-Activity Relationship studies

    PubMed Central

    Knight, Jennifer L.; Brooks, Charles L.

    2011-01-01

    Multi-Site λ-dynamics (MSλD) is a new free energy simulation method that is based on λ-dynamics. It has been developed to enable multiple substituents at multiple sites on a common ligand core to be modeled simultaneously and their free energies assessed. The efficacy of MSλD for estimating relative hydration free energies and relative binding affinities is demonstrated using three test systems. Model compounds representing multiple identical benzene, dihydroxybenzene and dimethoxybenzene molecules show that total combined MSλD trajectory lengths of ~1.5 ns are sufficient to reliably achieve relative hydration free energy estimates within 0.2 kcal/mol and are less sensitive to the number of trajectories that are used to generate these estimates for hybrid ligands that contain up to ten substituents modeled at a single site or five substituents modeled at each of two sites. Relative hydration free energies among six benzene derivatives calculated from MSλD simulations are in very good agreement with those from alchemical free energy simulations (with average unsigned differences of 0.23 kcal/mol and R2=0.991) and experiment (with average unsigned errors of 1.8 kcal/mol and R2=0.959). Estimates of the relative binding affinities among 14 inhibitors of HIV-1 reverse transcriptase obtained from MSλD simulations are in reasonable agreement with those from traditional free energy simulations and experiment (average unsigned errors of 0.9 kcal/mol and R2=0.402). For the same level of accuracy and precision, MSλD simulations are ~20–50 times faster than traditional free energy simulations and thus, with reliable force field parameters, can be used effectively to screen tens to hundreds of compounds in structure-based drug design applications. PMID:22125476

  4. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    PubMed

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

    We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0.2 D in dipole moments, and ∼4 kcal/mol in Zn-ligand bond energies) that cannot be neglected for accurate modeling, but the same density functionals that do well in all-electron nonrelativistic calculations do well with relativistic effective core potentials. Although most tests are carried out with augmented polarized triple-ζ basis sets, we also carried out some tests with an augmented polarized double-ζ basis set, and we found, on average, that with the smaller basis set DFT has no loss in accuracy for dipole moments and only ∼10% less accurate bond lengths.
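    The BMUE is unitless because the per-property mean unsigned errors (Å for bond lengths, D for dipole moments, kcal/mol for bond energies) are each divided by a property-specific scale before averaging; a minimal Python sketch under that assumption (the scales and error values below are placeholders, and the paper defines the actual normalization):

      import numpy as np

      def balanced_mue(errors_by_property, scale_by_property):
          """Average of per-property MUEs, each made unitless by a property-specific scale."""
          ratios = [np.mean(np.abs(np.asarray(errs))) / scale_by_property[prop]
                    for prop, errs in errors_by_property.items()]
          return float(np.mean(ratios))

      # Hypothetical signed errors for one functional and hypothetical normalizing scales
      errors = {"bond_A": [0.006, -0.010, 0.004], "dipole_D": [0.2, -0.3], "energy_kcal": [3.9, -4.7]}
      scales = {"bond_A": 0.02, "dipole_D": 0.5, "energy_kcal": 8.0}
      print(round(balanced_mue(errors, scales), 2))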

  5. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  6. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  7. Using polarizable POSSIM force field and fuzzy-border continuum solvent model to calculate pK(a) shifts of protein residues.

    PubMed

    Sharma, Ity; Kaminski, George A

    2017-01-15

    Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and Poisson-Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue that is buried and thus has a high experimental pKa value of 7.37 units. Thus, the presented model is capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.
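    The pKa shifts quoted here are conventionally obtained from the difference in deprotonation free energy between the protein residue and a reference model compound of known pKa; the standard relation (assumed here, not quoted from the paper) is:

      \[
        \mathrm{p}K_{a}(\mathrm{residue}) \;=\; \mathrm{p}K_{a}(\mathrm{model})
        + \frac{\Delta G_{\mathrm{deprot}}(\mathrm{residue}) - \Delta G_{\mathrm{deprot}}(\mathrm{model})}{2.303\,RT}
      \]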

  8. LOGISMOS-B for primates: primate cortical surface reconstruction and thickness measurement

    NASA Astrophysics Data System (ADS)

    Oguz, Ipek; Styner, Martin; Sanchez, Mar; Shi, Yundi; Sonka, Milan

    2015-03-01

    Cortical thickness and surface area are important morphological measures with implications for many psychiatric and neurological conditions. Automated segmentation and reconstruction of the cortical surface from 3D MRI scans is challenging due to the variable anatomy of the cortex and its highly complex geometry. While many methods exist for this task in the context of the human brain, these methods are typically not readily applicable to the primate brain. We propose an innovative approach based on our recently proposed human cortical reconstruction algorithm, LOGISMOS-B, and the Laplace-based thickness measurement method. Quantitative evaluation of our approach was performed based on a dataset of T1- and T2-weighted MRI scans from 12-month-old macaques where labeling by our anatomical experts was used as independent standard. In this dataset, LOGISMOS-B has an average signed surface error of 0.01 ± 0.03 mm and an unsigned surface error of 0.42 ± 0.03 mm over the whole brain. Excluding the rather problematic temporal pole region further improves unsigned surface distance to 0.34 ± 0.03 mm. This high level of accuracy reached by our algorithm even in this challenging developmental dataset illustrates its robustness and its potential for primate brain studies.

  9. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
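    As an illustration of the two statistics advocated here, a minimal Python sketch (the error vector, threshold, and units below are synthetic placeholders, not data from the paper):

      import numpy as np

      def ecdf_statistics(errors, threshold=1.0, confidence=0.95):
          """ECDF-based statistics of unsigned (absolute) errors."""
          abs_err = np.abs(np.asarray(errors, dtype=float))
          p_below = float(np.mean(abs_err < threshold))      # P(|error| < threshold)
          q_conf = float(np.quantile(abs_err, confidence))   # error bound at the chosen confidence level
          return p_below, q_conf

      rng = np.random.default_rng(0)
      errors = rng.normal(0.3, 1.2, size=1000)               # synthetic, non-zero-centered errors for illustration
      p1, q95 = ecdf_statistics(errors, threshold=1.0, confidence=0.95)
      print(f"P(|error| < 1 kcal/mol) = {p1:.2f}; Q95 = {q95:.2f} kcal/mol")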

  10. The 6-31B(d) basis set and the BMC-QCISD and BMC-CCSD multicoefficient correlation methods.

    PubMed

    Lynch, Benjamin J; Zhao, Yan; Truhlar, Donald G

    2005-03-03

    Three new multicoefficient correlation methods (MCCMs) called BMC-QCISD, BMC-CCSD, and BMC-CCSD-C are optimized against 274 data that include atomization energies, electron affinities, ionization potentials, and reaction barrier heights. A new basis set called 6-31B(d) is developed and used as part of the new methods. BMC-QCISD has mean unsigned errors in calculating atomization energies per bond and barrier heights of 0.49 and 0.80 kcal/mol, respectively. BMC-CCSD has mean unsigned errors of 0.42 and 0.71 kcal/mol for the same two quantities. BMC-CCSD-C is an equally effective variant of BMC-CCSD that employs Cartesian rather than spherical harmonic basis sets. The mean unsigned error of BMC-CCSD or BMC-CCSD-C for atomization energies, barrier heights, ionization potentials, and electron affinities is 22% lower than G3SX(MP2) at an order of magnitude less cost for gradients for molecules with 9-13 atoms, and it scales better (N⁶ vs N⁷, where N is the number of atoms) when the size of the molecule is increased.

  11. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

    Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Different from previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of its existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at the grid spacing of 1/2Å. In addition, a systematic correction analysis can be used to improve the accuracy for the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  12. Non-neutralized Electric Currents in Solar Active Regions and Flare Productivity

    NASA Astrophysics Data System (ADS)

    Kontogiannis, Ioannis; Georgoulis, Manolis K.; Park, Sung-Hong; Guerra, Jordan A.

    2017-11-01

    We explore the association of non-neutralized currents with solar flare occurrence in a sizable sample of observations, aiming to show the potential of such currents in solar flare prediction. We used the high-quality vector magnetograms that are regularly produced by the Helioseismic and Magnetic Imager (HMI), and more specifically, the Space-weather HMI Active Region Patches (SHARPs). Through a newly established method that incorporates detailed error analysis, we calculated the non-neutralized currents contained in active regions (AR). Two predictors were produced, namely the total and the maximum unsigned non-neutralized current. Both were tested in AR time-series and a representative sample of point-in-time observations during the interval 2012–2016. The average values of non-neutralized currents in flaring active regions are higher by more than an order of magnitude than in non-flaring regions and correlate very well with the corresponding flare index. The temporal evolution of these parameters appears to be connected to physical processes, such as flux emergence and/or magnetic polarity inversion line formation, that are associated with increased solar flare activity. Using Bayesian inference of flaring probabilities, we show that the total unsigned non-neutralized current significantly outperforms the total unsigned magnetic flux and other well-established current-related predictors. It therefore shows good prospects for inclusion in an operational flare-forecasting service. We plan to use the new predictor in the framework of the FLARECAST project along with other highly performing predictors.
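    The quantity behind both predictors can be sketched directly from Ampère's law; a minimal Python sketch, assuming a SHARP-like vector magnetogram (Bx, By, Bz in tesla on a uniform grid) and using a per-polarity net-current sum as a simple proxy (the paper's partition-based method and its error analysis are more involved):

      import numpy as np

      MU0 = 4e-7 * np.pi   # vacuum permeability, T m / A

      def vertical_current_density(bx, by, dx_m):
          """Ampere's law: Jz = (dBy/dx - dBx/dy) / mu0, with B in tesla and grid spacing dx_m in metres."""
          dby_dx = np.gradient(by, dx_m, axis=1)
          dbx_dy = np.gradient(bx, dx_m, axis=0)
          return (dby_dx - dbx_dy) / MU0                     # A / m^2

      def total_unsigned_nonneutralized_current(bx, by, bz, dx_m):
          """Sum of |net current| over the positive- and negative-polarity pixels (a simple proxy)."""
          jz = vertical_current_density(bx, by, dx_m)
          area = dx_m ** 2
          return abs(np.sum(jz[bz > 0])) * area + abs(np.sum(jz[bz < 0])) * area   # amperes

      # Hypothetical 100x100 vector magnetogram, ~360 km pixels, fields in tesla (1 G = 1e-4 T)
      rng = np.random.default_rng(1)
      bx, by, bz = (rng.normal(0.0, 5e-3, (100, 100)) for _ in range(3))
      print(f"{total_unsigned_nonneutralized_current(bx, by, bz, 3.6e5):.3e} A")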

  13. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.

  14. Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Data for Dissociation Energies of Bonds to 3d Transition Metals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng

    2015-05-12

    Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although the CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely, CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation (MUD(20) = 4.7 kcal/mol when one correlates only valence, 3p, and 3s electrons of transition metals and only valence electrons of ligands, or 4.6 kcal/mol when one correlates all core electrons except for 1s shells of transition metals, S, and Cl); and that is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster calculations and KS calculations, the T1 diagnostics correlate the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.

  15. Pavlovian contingencies and temporal information.

    PubMed

    Balsam, Peter D; Fairhurst, Stephen; Gallistel, Charles R

    2006-07-01

    The effects of altering the contingency between the conditioned stimulus (CS) and the unconditioned stimulus (US) on the acquisition of autoshaped responding were investigated by changing the frequency of unsignaled USs during the intertrial interval. The addition of the unsignaled USs had an effect on acquisition speed comparable with that of massing trials. The effects of these manipulations can be understood in terms of their effect on the amount of information (number of bits) that the average CS conveys to the subject about the timing of the next US. The number of reinforced CSs prior to acquisition is inversely related to the information content of the CS.

  16. Hydrogen bonding and pi-stacking: how reliable are force fields? A critical evaluation of force field descriptions of nonbonded interactions.

    PubMed

    Paton, Robert S; Goodman, Jonathan M

    2009-04-01

    We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol⁻¹ for all 165 complexes studied, and outperforms DFT calculations employing very large basis sets for the S22 complexes. The magnitudes of hydrogen bonding interactions are severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes which are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol⁻¹. For added clarity, web-based interactive displays of the results have been developed which allow comparisons of force field and ab initio geometries to be performed and the structures viewed and rotated in three dimensions.

  17. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile.

    PubMed

    Zanith, Caroline C; Pliego, Josefredo R

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data of neutral solutes in methanol was used in the parametrization, while only a few solvation free energy values of solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from literature, we have generated solvation free energy, enthalpy and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide and 19 solutes in acetonitrile. Both SMD and SM8 models have presented a good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol⁻¹ in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol⁻¹, respectively. Our results point out that the SMx family of models needs to be improved for dimethyl sulfoxide.

  18. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile

    NASA Astrophysics Data System (ADS)

    Zanith, Caroline C.; Pliego, Josefredo R.

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data of neutral solutes in methanol was used in the parametrization, while only a few solvation free energy values of solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from literature, we have generated solvation free energy, enthalpy and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide and 19 solutes in acetonitrile. Both SMD and SM8 models have presented a good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol-1 in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol-1, respectively. Our results point out that the SMx family of models needs to be improved for dimethyl sulfoxide.

  19. Accuracy of free energies of hydration using CM1 and CM3 atomic charges.

    PubMed

    Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L

    2004-08-01

    Absolute free energies of hydration (ΔGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute ΔGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.
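    The scale-factor selection step amounts to minimizing the unsigned average error over a set of candidate scalings; a minimal Python sketch, assuming the hydration free energies have already been recomputed by free energy perturbation for each candidate (all numbers below are hypothetical placeholders):

      import numpy as np

      def unsigned_average_error(calc, expt):
          """Mean absolute deviation between calculated and experimental values."""
          return float(np.mean(np.abs(np.asarray(calc) - np.asarray(expt))))

      # Hypothetical experimental hydration free energies (kcal/mol) and simulated results per charge scale
      expt = [-6.3, -4.9, -2.1, -5.1]
      calc_by_scale = {1.00: [-4.8, -3.6, -1.0, -3.9],
                       1.14: [-6.0, -4.7, -2.0, -5.3],
                       1.20: [-6.9, -5.4, -2.6, -5.9]}

      best = min(calc_by_scale, key=lambda s: unsigned_average_error(calc_by_scale[s], expt))
      print(best, round(unsigned_average_error(calc_by_scale[best], expt), 2))   # picks the lowest-MUE scaling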

  20. Calculating pKa values for substituted phenols and hydration energies for other compounds with the first-order Fuzzy-Border continuum solvation model

    PubMed Central

    Sharma, Ity; Kaminski, George A.

    2012-01-01

    We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192

  1. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task.

    PubMed

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-03-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior. © The Author(s) 2016.

  2. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task

    PubMed Central

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-01-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior. PMID:26755547

  3. Electric field theory based approach to search-direction line definition in image segmentation: application to optimal femur-tibia cartilage segmentation in knee-joint 3-D MR

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Sonka, M.

    2010-03-01

    A novel method is presented for definition of search lines in a variety of surface segmentation approaches. The method is inspired by properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently covering the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondent pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object, multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in 60 MR images from the Osteoarthritis Initiative (OAI), in which our approach achieved a very good performance as judged by surface positioning errors (average of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).

  4. Assessing the Accuracy of Density Functional and Semiempirical Wave Function Methods for Water Nanoparticles: Comparing Binding and Relative Energies of (H2O)16 and (H2O)17 to CCSD(T) Results.

    PubMed

    Leverentz, Hannah R; Qi, Helena W; Truhlar, Donald G

    2013-02-12

    The binding energies and relative conformational energies of five configurations of the water 16-mer are computed using 61 levels of density functional (DF) theory, 12 methods combining DF theory with molecular mechanics damped dispersion (DF-MM), seven semiempirical-wave function (SWF) methods, and five methods combining SWF theory with molecular mechanics damped dispersion (SWF-MM). The accuracies of the computed energies are assessed by comparing them to recent high-level ab initio results; this assessment is more relevant to bulk water than previous tests on small clusters because a 16-mer is large enough to have water molecules that participate in more than three hydrogen bonds. We find that for water 16-mer binding energies the best DF, DF-MM, SWF, and SWF-MM methods (and their mean unsigned errors in kcal/mol) are respectively M06-2X (1.6), ωB97X-D (2.3), SCC-DFTB-γ(h) (35.2), and PM3-D (3.2). We also mention the good performance of CAM-B3LYP (1.8), M05-2X (1.9), and TPSSLYP (3.0). In contrast, for relative energies of various water nanoparticle 16-mer structures, the best methods (and mean unsigned errors in kcal/mol), in the same order of classes of methods, are SOGGA11-X (0.3), ωB97X-D (0.2), PM6 (0.4), and PMOv1 (0.6). We also mention the good performance of LC-ωPBE-D3 (0.3) and ωB97X (0.4). When both relative and binding energies are taken into consideration, the best methods overall (out of the 85 tested) are M05-2X without molecular mechanics and ωB97X-D when molecular mechanics corrections are included; with considerably higher average errors and considerably lower cost, the best SWF or SWF-MM method is PMOv1. We use six of the best methods for binding energies of the water 16-mers to calculate the binding energies of water hexamers and water 17-mers to test whether these methods are also reliable for binding energy calculations on other types of water clusters.

  5. Unsignaled Delay of Reinforcement, Relative Time, and Resistance to Change

    ERIC Educational Resources Information Center

    Shahan, Timothy A.; Lattal, Kennon A.

    2005-01-01

    Two experiments with pigeons examined the effects of unsignaled, nonresetting delays of reinforcement on responding maintained by different reinforcement rates. In Experiment 1, 3-s unsignaled delays were introduced into each component of a multiple variable-interval (VI) 15-s VI 90-s VI 540-s schedule. When considered as a proportion of the…

  6. Application of Molecular Dynamics Simulations in Molecular Property Prediction II: Diffusion Coefficient

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2011-01-01

    In this work, we have evaluated how well the General AMBER force field (GAFF) performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, 5 organic compounds in aqueous solutions, 4 proteins in aqueous solutions, and 9 organic compounds in non-aqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned error (AUE) and the root-mean-square error (RMSE) are 0.137 and 0.171 × 10−5 cm−2 s−1, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for 8 organic solvents with experimental data (R2 = 0.784), 4 proteins in aqueous solutions (R2 = 0.996) and 9 organic compounds in non-aqueous solutions (R2 = 0.834). The temperature dependent behaviors of three solvents, namely, TIP3P water, dimethyl sulfoxide (DMSO) and cyclohexane have been studied. The major MD settings, such as the sizes of simulation boxes and with/without wrapping the coordinates of MD snapshots into the primary simulation boxes have been explored. We have concluded that our sampling strategy of averaging the mean square displacement (MSD) collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. PMID:21953689
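    The diffusion coefficients discussed here rest on the Einstein relation between the MSD slope and D; a minimal Python sketch, assuming MSD curves (Å² vs ps) have already been extracted from several short runs (all numbers below are hypothetical):

      import numpy as np

      def diffusion_coefficient(time_ps, msd_A2):
          """Einstein relation for 3-D diffusion: D = slope(MSD vs t) / 6.
          Input: time in ps, MSD in A^2.  Output: D in 10^-5 cm^2/s (1 A^2/ps = 10 x 10^-5 cm^2/s)."""
          slope, _ = np.polyfit(time_ps, msd_A2, 1)   # linear fit over the diffusive regime
          return slope / 6.0 * 10.0                   # convert A^2/ps to units of 10^-5 cm^2/s

      # Average the MSD over several short runs before fitting, following the sampling strategy above
      rng = np.random.default_rng(0)
      time = np.linspace(0.0, 100.0, 101)                                            # ps
      msd_runs = [1.4 * time + rng.normal(0.0, 2.0, time.size) for _ in range(10)]   # hypothetical MSDs
      print(f"D = {diffusion_coefficient(time, np.mean(msd_runs, axis=0)):.2f} x 10^-5 cm^2/s")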

  7. Automatic short axis orientation of the left ventricle in 3D ultrasound recordings

    NASA Astrophysics Data System (ADS)

    Pedrosa, João; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan

    2016-04-01

    The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° ± 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which potentiates real-time application. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.
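    Signed and unsigned orientation errors of this kind require wrapping the automatic-minus-manual difference onto a single turn; a minimal Python sketch (the angle values are hypothetical):

      import numpy as np

      def angular_errors(auto_deg, manual_deg):
          """Mean signed and unsigned angular differences, wrapped to [-180, 180) degrees."""
          diff = (np.asarray(auto_deg) - np.asarray(manual_deg) + 180.0) % 360.0 - 180.0
          return float(np.mean(diff)), float(np.mean(np.abs(diff)))

      # Hypothetical automatic vs manual orientation angles for four recordings (degrees)
      signed, unsigned = angular_errors([10.0, 175.0, -170.0, 30.0], [5.0, -178.0, 175.0, 60.0])
      print(f"mean signed = {signed:.2f} deg, mean unsigned = {unsigned:.2f} deg")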

  8. Algorithms for sorting unsigned linear genomes by the DCJ operations.

    PubMed

    Jiang, Haitao; Zhu, Binhai; Zhu, Daming

    2011-02-01

    The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^(2k) n) time.
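    For reference, the signed DCJ distance that the 1.5-approximation builds on has a well-known closed form in terms of the adjacency graph of the two genomes, with N genes, C cycles, and I odd paths:

      \[
        d_{\mathrm{DCJ}}(A,B) \;=\; N - \left(C + \frac{I}{2}\right)
      \]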

  9. Automated segmentation of intraretinal layers from macular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Haeker, Mona; Sonka, Milan; Kardon, Randy; Shah, Vinay A.; Wu, Xiaodong; Abràmoff, Michael D.

    2007-03-01

    Commercially-available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated series (up to six, when available) were acquired for each eye and were first registered and averaged together, resulting in a composite image for each angular location. The six surfaces defining the five layers were then found on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori-determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries were independently defined by two human experts on one raw scan of each eye. Using the average of the experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 ± 4.0 μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two observers (p < 0.05, pixel size of 50 × 2 μm).

  10. Application of molecular dynamics simulations in molecular property prediction II: diffusion coefficient.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2011-12-01

    In this work, we have evaluated how well the general assisted model building with energy refinement (AMBER) force field performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, five organic compounds in aqueous solutions, four proteins in aqueous solutions, and nine organic compounds in nonaqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned errors and the root mean square errors are 0.137 and 0.171 × 10−5 cm−2 s−1, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for eight organic solvents with experimental data (R2 = 0.784), four proteins in aqueous solutions (R2 = 0.996), and nine organic compounds in nonaqueous solutions (R2 = 0.834). The temperature dependent behaviors of three solvents, namely, TIP3P water, dimethyl sulfoxide, and cyclohexane have been studied. The major molecular dynamics (MD) settings, such as the sizes of simulation boxes and with/without wrapping the coordinates of MD snapshots into the primary simulation boxes have been explored. We have concluded that our sampling strategy of averaging the mean square displacement collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. Copyright © 2011 Wiley Periodicals, Inc.

  11. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R2=0.81-0.91) and with experimental hydration free energies (average unsigned errors=1.1-1.4 kcal/mol and R2=0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452

  12. Application of Molecular Dynamics Simulations in Molecular Property Prediction I: Density and Heat of Vaporization

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2011-01-01

    Molecular mechanical force field (FF) methods are useful in studying condensed phase properties. They are complementary to experiment and can often go beyond experiment in atomic details. Even if a FF is specific for studying structures, dynamics and functions of biomolecules, it is still important for the FF to accurately reproduce the experimental liquid properties of small molecules that represent the chemical moieties of biomolecules. Otherwise, the force field may not describe the structures and energies of macromolecules in aqueous solutions properly. In this work, we have carried out a systematic study to evaluate the General AMBER Force Field (GAFF) in studying densities and heats of vaporization for a large set of organic molecules that covers the most common chemical functional groups. The latest techniques, such as the particle mesh Ewald (PME) for calculating electrostatic energies, and Langevin dynamics for scaling temperatures, have been applied in the molecular dynamics (MD) simulations. For density, the average percent error (APE) of 71 organic compounds is 4.43% when compared to the experimental values. More encouragingly, the APE drops to 3.43% after the exclusion of two outliers and four other compounds for which the experimental densities have been measured with pressures higher than 1.0 atm. For heat of vaporization, several protocols have been investigated and the best one, P4/ntt0, achieves an average unsigned error (AUE) and a root-mean-square error (RMSE) of 0.93 and 1.20 kcal/mol, respectively. How to reduce the prediction errors through proper van der Waals (vdW) parameterization has been discussed. An encouraging finding in vdW parameterization is that both densities and heats of vaporization approach their “ideal” values in a synchronous fashion when vdW parameters are tuned. The following hydration free energy calculation using thermodynamic integration further justifies the vdW refinement. We conclude that simple vdW parameterization can significantly reduce the prediction errors. We believe that GAFF can greatly improve its performance in predicting liquid properties of organic molecules after a systematic vdW parameterization, which will be reported in a separate paper. PMID:21857814
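    The heats of vaporization evaluated here are conventionally estimated from gas- and liquid-phase potential energies per molecule; the standard MD estimate (assumed here, the paper may apply further corrections) is:

      \[
        \Delta H_{\mathrm{vap}} \;\approx\; \langle E_{\mathrm{pot}}^{\mathrm{gas}}\rangle
        - \frac{\langle E_{\mathrm{pot}}^{\mathrm{liq}}\rangle}{N} + RT
      \]

    where the liquid-phase average is the total potential energy of a periodic box of N molecules.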

  13. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
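    The neutral-species approximation used here reduces to a simple relation between the two solvation free energies and log P (taken to approximate log D); a minimal Python sketch with hypothetical numbers:

      RT_298 = 0.593  # kcal/mol at 298 K

      def log_p_neutral(dg_solv_water, dg_solv_cyclohexane, rt=RT_298):
          """log10 cyclohexane/water partition coefficient of a neutral solute
          from its two solvation free energies (kcal/mol)."""
          return (dg_solv_water - dg_solv_cyclohexane) / (2.303 * rt)

      # Hypothetical solute: dG_solv of -4.0 (water) and -6.5 (cyclohexane) kcal/mol
      print(round(log_p_neutral(-4.0, -6.5), 2))   # ~1.83: the solute prefers cyclohexane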

  14. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    DTIC Science & Technology

    2011-10-01

  15. Safety and Guidelines for Marked and Unmarked Pedestrian Crosswalks at Unsignalized Intersections in Nevada

    DOT National Transportation Integrated Search

    2012-09-01

    This report examines two aspects of marked and unmarked crosswalks at unsignalized intersections. Firstly, the report assesses the safety performance of marked/unmarked crosswalks in Nevada through comparing pedestrian-related crash rates. In which, ...

  16. 10. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, THIRD FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  17. 11. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, FOURTH FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  18. 9. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, SECOND FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  19. 8. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, FIRST-FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  20. Safety effects of unsignalized superstreets in North Carolina.

    PubMed

    Ott, Sarah E; Haley, Rebecca L; Hummer, Joseph E; Foyle, Robert S; Cunningham, Christopher M

    2012-03-01

    Arterials across the United States are experiencing far too many collisions. Agencies tasked with improving these arterials have few available effective solutions. Superstreets, called restricted crossing u-turns by the Federal Highway Administration (FHWA), are part of a menu of unconventional arterial intersection designs that may provide a promising solution. Up to this point, there is little valid information available on the safety effects of superstreets, as study results have been from basic analyses that only account for traffic volume changes. The purpose of this research was to determine the safety effects of the unsignalized superstreet countermeasure on existing arterials in North Carolina. The safety study involved traffic flow adjustment, comparison-group, and Empirical Bayes analyses of 13 unsignalized superstreet intersections in North Carolina. The superstreets have been installed in the last few years across the state as opportunities presented themselves, but not necessarily at the most hazardous sites. The unsignalized superstreet countermeasure showed a significant reduction in total, angle and right turn, and left turn collisions in all analyses. Analyses also showed a significant reduction in fatal and injury collisions. The authors recommend that future analysts use a crash modification factor of 46 percent when considering the conversion of a typical unsignalized arterial intersection into a superstreet. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. PHYSICAL PROPERTIES OF LARGE AND SMALL GRANULES IN SOLAR QUIET REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Xie Zongxia; Hu Qinghua

    The normal mode observations of seven quiet regions obtained by the Hinode spacecraft are analyzed to study the physical properties of granules. An artificial intelligence technique is introduced to automatically find the spatial distribution of granules in feature spaces. In this work, we investigate the dependence of granular continuum intensity, mean Doppler velocity, and magnetic fields on granular diameter. We recognized 71,538 granules by an automatic segmentation technique and then extracted five properties: diameter, continuum intensity, Doppler velocity, and longitudinal and transverse magnetic flux density to describe the granules. To automatically explore the intrinsic structures of the granules in the five-dimensional parameter space, the X-means clustering algorithm and one-rule classifier are introduced to define the rules for classifying the granules. It is found that diameter is a dominating parameter in classifying the granules and two families of granules are derived: small granules with diameters smaller than 1.″44, and large granules with diameters larger than 1.″44. Based on statistical analysis of the detected granules, the following results are derived: (1) the averages of diameter, continuum intensity, and Doppler velocity in the upward direction of large granules are larger than those of small granules; (2) the averages of absolute longitudinal, transverse, and unsigned flux density of large granules are smaller than those of small granules; (3) for small granules, the average of continuum intensity increases with their diameters, while the averages of Doppler velocity, transverse, absolute longitudinal, and unsigned magnetic flux density decrease with their diameters. However, the mean properties of large granules are stable; (4) the intensity distributions of all granules and small granules do not satisfy Gaussian distribution, while that of large granules almost agrees with normal distribution with a peak at 1.04 I_0.

  2. Performance of Frozen Density Embedding for Modeling Hole Transfer Reactions.

    PubMed

    Ramos, Pablo; Papadakis, Markos; Pavanello, Michele

    2015-06-18

    We have carried out a thorough benchmark of the frozen density-embedding (FDE) method for calculating hole transfer couplings. We have considered 10 exchange-correlation functionals, 3 nonadditive kinetic energy functionals, and 3 basis sets. Overall, we conclude that with a 7% mean relative unsigned error, the PBE and PW91 functionals coupled with the PW91k nonadditive kinetic energy functional and a TZP basis set constitute the most stable and accurate levels of theory for hole transfer coupling calculations. The FDE-ET method is found to be an excellent tool for computing diabatic couplings for hole transfer reactions.

  3. Developing a short range vehicle to infrastructure communication system to enhance the safety at STOP sign intersections : final report.

    DOT National Transportation Integrated Search

    2016-05-01

    Stop-sign-controlled unsignalized intersections raise a public safety concern. Even though various strategies, such as engineering, education, and policy, have been applied in practice, a number of fatal crashes have occurred at unsignalized inter...

  4. Contingency Tracking during Unsignaled Delayed Reinforcement

    ERIC Educational Resources Information Center

    Keely, Josue; Feola, Tyler; Lattal, Kennon A.

    2007-01-01

    Three experiments were conducted with rats in which responses on one lever (labeled the functional lever) produced reinforcers after an unsignaled delay period that reset with each response during the delay. Responses on a second, nonfunctional, lever did not initiate delays, but, in the first and third experiments, such responses during the last…

  5. Comparing alchemical and physical pathway methods for computing the absolute binding free energy of charged ligands.

    PubMed

    Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald

    2018-06-13

    Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol-1, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol-1) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.

  6. Toward polarizable AMOEBA thermodynamics at fixed charge efficiency using a dual force field approach: application to organic crystals.

    PubMed

    Nessler, Ian J; Litman, Jacob M; Schnieders, Michael J

    2016-11-09

    First principles prediction of the structure, thermodynamics and solubility of organic molecular crystals, which play a central role in chemical, material, pharmaceutical and engineering sciences, challenges both potential energy functions and sampling methodologies. Here we calculate absolute crystal deposition thermodynamics using a novel dual force field approach whose goal is to maintain the accuracy of advanced multipole force fields (e.g. the polarizable AMOEBA model) while performing more than 95% of the sampling in an inexpensive fixed charge (FC) force field (e.g. OPLS-AA). Absolute crystal sublimation/deposition phase transition free energies were determined using an alchemical path that grows the crystalline state from a vapor reference state based on sampling with the OPLS-AA force field, followed by dual force field thermodynamic corrections to change between FC and AMOEBA resolutions at both end states (we denote the three step path as AMOEBA/FC). Importantly, whereas the phase transition requires on the order of 200 ns of sampling per compound, only 5 ns of sampling was needed for the dual force field thermodynamic corrections to reach a mean statistical uncertainty of 0.05 kcal mol-1. For five organic compounds, the mean unsigned error between direct use of AMOEBA and the AMOEBA/FC dual force field path was only 0.2 kcal mol-1 and not statistically significant. Compared to experimental deposition thermodynamics, the mean unsigned error for AMOEBA/FC (1.4 kcal mol-1) was more than a factor of two smaller than uncorrected OPLS-AA (3.2 kcal mol-1). Overall, the dual force field thermodynamic corrections reduced condensed phase sampling in the expensive force field by a factor of 40, and may prove useful for protein stability or binding thermodynamics in the future.
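    The cycle-closure arithmetic behind the AMOEBA/FC path described above can be written as a three-leg sum. The sketch below is illustrative only (not the authors' code), and the leg values are made-up placeholders in kcal/mol.

```python
# Hedged sketch: closing the dual force field thermodynamic cycle
# vapor(AMOEBA) -> vapor(FC) -> crystal(FC) -> crystal(AMOEBA).

def deposition_dual_ff(dG_dep_fc, dG_fc_to_amoeba_vapor, dG_fc_to_amoeba_crystal):
    """Deposition free energy at AMOEBA resolution via the FC path.

    The first leg reverses the FC -> AMOEBA correction in the vapor state,
    hence the minus sign.
    """
    return -dG_fc_to_amoeba_vapor + dG_dep_fc + dG_fc_to_amoeba_crystal

# Hypothetical leg values (kcal/mol), not simulation output.
print(deposition_dual_ff(dG_dep_fc=-20.3,
                         dG_fc_to_amoeba_vapor=1.1,
                         dG_fc_to_amoeba_crystal=0.4))
```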

  7. PRODUCTIVITY OF SOLAR FLARES AND MAGNETIC HELICITY INJECTION IN ACTIVE REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sung-hong; Wang Haimin; Chae, Jongchul, E-mail: sp295@njit.ed

    The main objective of this study is to better understand how magnetic helicity injection in an active region (AR) is related to the occurrence and intensity of solar flares. We therefore investigate the magnetic helicity injection rate and unsigned magnetic flux, as a reference. In total, 378 ARs are analyzed using SOHO/MDI magnetograms. The 24 hr averaged helicity injection rate and unsigned magnetic flux are compared with the flare index and the flare-productive probability in the next 24 hr following a measurement. In addition, we study the variation of helicity over a span of several days around the times of the 19 flares above M5.0 which occurred in selected strong flare-productive ARs. The major findings of this study are as follows: (1) for a sub-sample of 91 large ARs with unsigned magnetic fluxes in the range from (3-5) × 10²² Mx, there is a difference in the magnetic helicity injection rate between flaring ARs and non-flaring ARs by a factor of 2; (2) the GOES C-flare-productive probability as a function of helicity injection displays a sharp boundary between flare-productive ARs and flare-quiet ones; (3) the history of helicity injection before all the 19 major flares displayed a common characteristic: a significant helicity accumulation of (3-45) × 10⁴² Mx² during a phase of monotonically increasing helicity over 0.5-2 days. Our results support the notion that helicity injection is important in flares, but it is not effective to use it alone for the purpose of flare forecast. It is necessary to find a way to better characterize the time history of helicity injection as well as its spatial distribution inside ARs.

  8. Productivity of Solar Flares and Magnetic Helicity Injection in Active Regions

    NASA Astrophysics Data System (ADS)

    Park, Sung-hong; Chae, Jongchul; Wang, Haimin

    2010-07-01

    The main objective of this study is to better understand how magnetic helicity injection in an active region (AR) is related to the occurrence and intensity of solar flares. We therefore investigate the magnetic helicity injection rate and unsigned magnetic flux, as a reference. In total, 378 ARs are analyzed using SOHO/MDI magnetograms. The 24 hr averaged helicity injection rate and unsigned magnetic flux are compared with the flare index and the flare-productive probability in the next 24 hr following a measurement. In addition, we study the variation of helicity over a span of several days around the times of the 19 flares above M5.0 which occurred in selected strong flare-productive ARs. The major findings of this study are as follows: (1) for a sub-sample of 91 large ARs with unsigned magnetic fluxes in the range from (3-5) × 10²² Mx, there is a difference in the magnetic helicity injection rate between flaring ARs and non-flaring ARs by a factor of 2; (2) the GOES C-flare-productive probability as a function of helicity injection displays a sharp boundary between flare-productive ARs and flare-quiet ones; (3) the history of helicity injection before all the 19 major flares displayed a common characteristic: a significant helicity accumulation of (3-45) × 10⁴² Mx² during a phase of monotonically increasing helicity over 0.5-2 days. Our results support the notion that helicity injection is important in flares, but it is not effective to use it alone for the purpose of flare forecast. It is necessary to find a way to better characterize the time history of helicity injection as well as its spatial distribution inside ARs.
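    To make the helicity bookkeeping concrete, the sketch below integrates a hypothetical dH/dt time series to obtain an accumulated helicity and a 24 hr running average; the cadence, profile, and variable names are assumptions, not the authors' pipeline.

```python
# Hedged sketch: helicity accumulation from an injection-rate time series.
import numpy as np

t_hours = np.arange(0, 48, 0.75)                     # ~2 days at an assumed 45-min cadence
dH_dt = 1.0e42 * np.exp(-((t_hours - 30) / 8) ** 2)  # hypothetical injection rate, Mx^2/hr

accumulated = np.trapz(dH_dt, t_hours)               # helicity accumulated over the window, Mx^2
rate_24h = np.convolve(dH_dt, np.ones(32) / 32, mode="same")  # 32 samples * 0.75 hr = 24 hr mean

print(f"accumulated ~ {accumulated:.2e} Mx^2, peak 24-hr mean rate ~ {rate_24h.max():.2e} Mx^2/hr")
```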

  9. Resistance to Change of Responding Maintained by Unsignaled Delays to Reinforcement: A Response-Bout Analysis

    ERIC Educational Resources Information Center

    Podlesnik, Christopher A.; Jimenez-Gomez, Corina; Ward, Ryan D.; Shahan, Timothy A.

    2006-01-01

    Previous experiments have shown that unsignaled delayed reinforcement decreases response rates and resistance to change. However, the effects of different delays to reinforcement on underlying response structure have not been investigated in conjunction with tests of resistance to change. In the present experiment, pigeons responded on a…

  10. Signaled and Unsignaled Terminal Links in Concurrent Chains I: Effects of Reinforcer Probability and Immediacy

    ERIC Educational Resources Information Center

    Mattson, Karla M.; Hucks, Andrew; Grace, Randolph C.; McLean, Anthony P.

    2010-01-01

    Eight pigeons responded in a three-component concurrent-chains procedure, with either independent or dependent initial links. Relative probability and immediacy of reinforcement in the terminal links were both varied, and outcomes on individual trials (reinforcement or nonreinforcement) were either signaled or unsignaled. Terminal-link fixed-time…

  11. Context Blocking in Rat Autoshaping: Sign-Tracking versus Goal-Tracking

    ERIC Educational Resources Information Center

    Costa, Daniel S. J.; Boakes, Robert A.

    2009-01-01

    Prior experience of unsignaled food can interfere with subsequent acquisition by birds of autoshaped key-pecking at a signal light. This has been understood to indicate that unsignaled food results in context conditioning, which blocks subsequent learning about the keylight-food relationship. In the present experiment with rats lever insertion as…

  12. Analysis of Crossing Speed of the Pedestrians in Marked and Unmarked Crosswalks in the Signalized and Un-Signalized Intersections (Case Study: Rasht city)

    NASA Astrophysics Data System (ADS)

    Behbahani, Hamid; Najafi Moghaddam Gilani, Vahid; Jahangir Samet, Mehdi; Salehfard, Reza

    2017-10-01

    Pedestrians affect the traffic at signalized and un-signalized intersections. Therefore, identifying the behavioural features of pedestrians is of great importance and may lead to better design of facilities for them. In this study, four intersections in Rasht were filmed for 15 hours and 4568 pedestrians were surveyed, and the crossing speed of pedestrians in marked and unmarked crosswalks was evaluated and analysed. Results showed that pedestrians' crossing speed in marked crosswalks is higher than in unmarked crosswalks at both signalized and un-signalized intersections. Moreover, in unmarked crosswalks at signalized intersections, the 15th percentile speeds of male pedestrians, female pedestrians, and groups of pedestrians decrease by 6.4%, 5.4%, and 12.2%, respectively, compared with the 15th percentile speeds in marked crosswalks. The corresponding values in unmarked crosswalks at un-signalized intersections are decreases of 1.2%, 3.8%, and 1.4% for male pedestrians, female pedestrians, and groups of pedestrians, respectively.

  13. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    NASA Astrophysics Data System (ADS)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
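    The three summary statistics quoted in this record (average unsigned error, correlation coefficient, and fraction of correct signs) reduce to a few lines of arithmetic. The sketch below uses hypothetical predicted and experimental log D values, not the SAMPL5 data.

```python
# Hedged sketch: AUE, Pearson R, and sign agreement for predicted vs. experimental log D.
import numpy as np

pred = np.array([1.2, -0.4, 2.8, -1.1, 0.3])        # predicted log D (hypothetical)
expt = np.array([0.9, -1.0, 1.6, -0.2, 0.8])        # experimental log D (hypothetical)

aue = np.mean(np.abs(pred - expt))                  # average unsigned error
r = np.corrcoef(pred, expt)[0, 1]                   # Pearson correlation coefficient
sign_ok = np.mean(np.sign(pred) == np.sign(expt))   # fraction with the correct sign

print(f"AUE = {aue:.2f} log units, R = {r:.2f}, correct sign = {100 * sign_ok:.0f}%")
```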

  14. A Generalized DIF Effect Variance Estimator for Measuring Unsigned Differential Test Functioning in Mixed Format Tests

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Algina, James

    2006-01-01

    One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…

  15. The charge transfer electronic coupling in diabatic perspective: A multi-state density functional theory study

    NASA Astrophysics Data System (ADS)

    Guo, Xinwei; Qu, Zexing; Gao, Jiali

    2018-01-01

    The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.

  16. Pole-strength of the earth from Magsat and magnetic determination of the core radius

    NASA Technical Reports Server (NTRS)

    Voorhies, G. V.; Benton, E. R.

    1982-01-01

    A model based on two days of Magsat data is used to numerically evaluate the unsigned magnetic flux linking the earth's surface, and a comparison of the 16.054 GWb value calculated with values from earlier geomagnetic field models reveals a smooth, monotonic, and recently-accelerating decrease in the earth's pole strength at a 50-year average rate of 8.3 MWb, or 0.052%/year. Hide's (1978) magnetic technique for determining the radius of the earth's electrically-conducting core is tested by (1) extrapolating main field models for 1960 and 1965 downward through the nearly-insulating mantle, and then separately comparing them to equivalent, extrapolated models of Magsat data. The two unsigned fluxes are found to equal the Magsat values at a radius which is within 2% of the core radius; and (2) the 1960 main field and secular variation and acceleration coefficients are used to derive models of 1930, 1940 and 1950. The same core magnetic radius value, within 2% of the seismic value, is obtained. It is concluded that the mantle is a nearly-perfect insulator, while the core is a perfect conductor, on the decade time scale.
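    The unsigned flux referred to here is the surface integral of |B_r| over the sphere. The sketch below evaluates it numerically for a toy dipole-like field on a latitude-longitude grid; the field model and grid are assumptions for illustration only.

```python
# Hedged sketch: unsigned magnetic flux Phi = surface integral of |B_r| dA.
import numpy as np

R = 6.371e6                                    # Earth radius, m
theta = np.linspace(0.0, np.pi, 181)           # colatitude
phi = np.linspace(0.0, 2.0 * np.pi, 361)       # longitude
TH, _ = np.meshgrid(theta, phi, indexing="ij")

Br = 3.0e-5 * np.cos(TH)                       # toy dipole-like radial field, tesla

integrand = np.abs(Br) * np.sin(TH) * R ** 2   # |B_r| R^2 sin(theta) dtheta dphi
unsigned_flux = np.trapz(np.trapz(integrand, phi, axis=1), theta)
print(f"unsigned flux ~ {unsigned_flux:.2e} Wb")  # a few GWb for this toy field
```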

  17. Evaluation of fixed momentary dro schedules under signaled and unsignaled arrangements.

    PubMed

    Hammond, Jennifer L; Iwata, Brian A; Fritz, Jennifer N; Dempsey, Carrie M

    2011-01-01

    Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response requirement of the schedule per se or (b) discrimination of the contingency made more salient by the signal. To separate these two potential influences, we compared the effects of signaled versus unsignaled FM DRO with 4 individuals with developmental disabilities whose problem behavior was maintained by social-positive reinforcement. During signaled FM DRO, the experimenter presented a visual stimulus 3 s prior to the end of the DRO interval and delivered reinforcement contingent on the absence of problem behavior at the second the interval elapsed. Unsignaled DRO was identical except that interval termination was not signaled. Results indicated that signaled FM DRO was effective in decreasing 2 subjects' problem behavior, whereas an unsignaled schedule was required for the remaining 2 subjects. These results suggest that the response requirement per se of FM DRO may not be problematic if it is not easily discriminated.

  18. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
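    The transfer-function idea used here amounts to dividing a cross-spectrum by an input power spectrum. The sketch below does this for synthetic input/output series with SciPy; the signals, sampling rate, and window length are stand-ins, not the pigeon data.

```python
# Hedged sketch: empirical transfer function H(f) = S_xy(f) / S_xx(f) for synthetic data.
import numpy as np
from scipy import signal

fs = 1.0                                        # one sample per second (assumed)
n = 20000                                       # ten 2,000-s trials worth of samples
rng = np.random.default_rng(0)
x = rng.standard_normal(n)                      # "input": schedule perturbation
b, a = signal.butter(2, 0.05)                   # some unknown linear system
y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(n)  # "output": behavior plus noise

f, Pxy = signal.csd(x, y, fs=fs, nperseg=2000)
_, Pxx = signal.welch(x, fs=fs, nperseg=2000)
H = Pxy / Pxx                                   # transfer function estimate
print(np.abs(H[:5]))                            # gain at the lowest frequencies
```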

  19. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action.

    PubMed

    Bissonette, Gregory B; Roesch, Matthew R

    2016-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.

  20. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action

    PubMed Central

    Roesch, Matthew R.

    2017-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036

  1. Optimization of parameters for semiempirical methods V: Modification of NDDO approximations and application to 70 elements

    PubMed Central

    2007-01-01

    Several modifications that have been made to the NDDO core-core interaction term and to the method of parameter optimization are described. These changes have resulted in a more complete parameter optimization, called PM6, which has, in turn, allowed 70 elements to be parameterized. The average unsigned error (AUE) between calculated and reference heats of formation for 4,492 species was 8.0 kcal mol−1. For the subset of 1,373 compounds involving only the elements H, C, N, O, F, P, S, Cl, and Br, the PM6 AUE was 4.4 kcal mol−1. The equivalent AUEs for other methods were: RM1: 5.0, B3LYP 6–31G*: 5.2, PM5: 5.7, PM3: 6.3, HF 6–31G*: 7.4, and AM1: 10.0 kcal mol−1. Several long-standing faults in AM1 and PM3 have been corrected and significant improvements have been made in the prediction of geometries. [Figure: calculated structure of the complex ion [Ta6Cl12]2+; reference value in parentheses.] PMID:17828561

  2. The Linear Interaction Energy Method for the Prediction of Protein Stability Changes Upon Mutation

    PubMed Central

    Wickstrom, Lauren; Gallicchio, Emilio; Levy, Ronald M.

    2011-01-01

    The coupling of protein energetics and sequence changes is a critical aspect of computational protein design, as well as for the understanding of protein evolution, human disease, and drug resistance. In order to study the molecular basis for this coupling, computational tools must be sufficiently accurate and computationally inexpensive enough to handle large amounts of sequence data. We have developed a computational approach based on the linear interaction energy (LIE) approximation to predict the changes in the free energy of the native state induced by a single mutation. This approach was applied to a set of 822 mutations in 10 proteins which resulted in an average unsigned error of 0.82 kcal/mol and a correlation coefficient of 0.72 between the calculated and experimental ΔΔG values. The method is able to accurately identify destabilizing hot spot mutations; however, it has difficulty in distinguishing between stabilizing and destabilizing mutations due to the distribution of stability changes for the set of mutations used to parameterize the model. In addition, the model performs quite well in initial tests on a small set of double mutations. Based on these promising results, we can begin to examine the relationship between protein stability and fitness, correlated mutations, and drug resistance. PMID:22038697
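    As a rough illustration of the linear interaction energy form mentioned above, the sketch below evaluates dG ~ alpha*<dU_vdW> + beta*<dU_elec> + gamma for a wild type and a mutant and takes the difference; the coefficients, averages, and sign convention are placeholders, not the authors' parameterization.

```python
# Hedged sketch: LIE-style ddG estimate for a single mutation.

def lie_free_energy(mean_dU_vdw, mean_dU_elec, alpha=0.18, beta=0.5, gamma=0.0):
    # alpha, beta, gamma are empirical coefficients fit to training data;
    # the defaults here are placeholders.
    return alpha * mean_dU_vdw + beta * mean_dU_elec + gamma

dG_wt = lie_free_energy(mean_dU_vdw=-12.4, mean_dU_elec=-30.1)   # hypothetical averages
dG_mut = lie_free_energy(mean_dU_vdw=-10.9, mean_dU_elec=-27.8)

ddG = dG_mut - dG_wt   # by this convention, positive ddG indicates destabilization
print(f"ddG = {ddG:.2f} kcal/mol")
```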

  3. An extended car-following model at un-signalized intersections under V2V communication environment

    PubMed Central

    Wang, Tao; Li, Peng

    2018-01-01

    An extended car-following model is proposed in this paper to analyze the impacts of V2V (vehicle to vehicle) communication on the micro driving behavior at the un-signalized intersection. A four-leg un-signalized intersection with twelve streams (left-turn, through movement, and right turn from each leg) is used. The effect of the guidance strategy on the reduction of the rate of stops and total delay is explored by comparing the proposed model and the traditional FVD car-following model. The numerical results illustrate that potential conflicts between vehicles can be predicted and some stops can be avoided by decelerating in advance. The driving comfort and traffic efficiency can be improved accordingly. More benefits could be obtained under the long communication range, low to medium traffic density, and simple traffic pattern conditions. PMID:29425243

  4. A comparative study of the structures and electronic properties of graphene fragments: A DFT and MP2 survey

    NASA Astrophysics Data System (ADS)

    de Carvalho, E. F. V.; Lopez-Castillo, A.; Roberto-Neto, O.

    2018-01-01

    Graphene can be viewed as a sheet of benzene rings fused together, forming a variety of structures including the trioxotriangulenes (TOTs), a class of organic molecules with electro-active properties. In order to clarify such properties, the structures and electronic properties of the graphene fragments phenalenyl, triangulene, 6-oxophenalenoxyl, and X3TOT (X = H, F, Cl) are computed. Validation of the methodologies is carried out using the density functionals B3LYP, M06-2X, B2PLYP-D, and MP2 theory, which yield equilibrium geometries of benzene, naphthalene, and anthracene with mean unsigned errors (MUE) of only 0.003, 0.007, 0.004, and 0.007 Å, respectively, relative to experiment.

  5. Frontal Theta Reflects Uncertainty and Unexpectedness during Exploration and Exploitation

    PubMed Central

    Figueroa, Christina M.; Cohen, Michael X; Frank, Michael J.

    2012-01-01

    In order to understand the exploitation/exploration trade-off in reinforcement learning, previous theoretical and empirical accounts have suggested that increased uncertainty may precede the decision to explore an alternative option. To date, the neural mechanisms that support the strategic application of uncertainty-driven exploration remain underspecified. In this study, electroencephalography (EEG) was used to assess trial-to-trial dynamics relevant to exploration and exploitation. Theta-band activities over middle and lateral frontal areas have previously been implicated in EEG studies of reinforcement learning and strategic control. It was hypothesized that these areas may interact during top-down strategic behavioral control involved in exploratory choices. Here, we used a dynamic reward–learning task and an associated mathematical model that predicted individual response times. This reinforcement-learning model generated value-based prediction errors and trial-by-trial estimates of exploration as a function of uncertainty. Mid-frontal theta power correlated with unsigned prediction error, although negative prediction errors had greater power overall. Trial-to-trial variations in response-locked frontal theta were linearly related to relative uncertainty and were larger in individuals who used uncertainty to guide exploration. This finding suggests that theta-band activities reflect prefrontal-directed strategic control during exploratory choices. PMID:22120491
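    The unsigned prediction error tracked by mid-frontal theta is the absolute value of a standard reinforcement-learning error term. The sketch below simulates value updates on a probabilistic reward and records |delta|; the learning rate and reward schedule are arbitrary choices, not the task parameters used in the study.

```python
# Hedged sketch: signed vs. unsigned reward prediction errors in a delta-rule learner.
import numpy as np

rng = np.random.default_rng(0)
alpha, V = 0.1, 0.0
unsigned_pe = []

for trial in range(200):
    reward = rng.choice([0.0, 1.0], p=[0.3, 0.7])   # probabilistic outcome
    delta = reward - V                               # signed prediction error
    V += alpha * delta                               # value update
    unsigned_pe.append(abs(delta))                   # unsigned prediction error

print(f"final V = {V:.2f}, mean |delta| = {np.mean(unsigned_pe):.2f}")
```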

  6. Noise and Sonic Boom Impact Technology. Initial Development of an Assessment System for Aircraft Noise (ASAN). Volume 3. Technical Description

    DTIC Science & Technology

    1989-06-01

    field width, it will be padded on the left (or right, if the left adjustment indicator has been given) to make up the field width. The padding character...is blank (space) normally, and zero if the field width was specified with a leading zero (this zero does not imply an octal field width). o The argument is converted to unsigned octal notation (without a leading zero). x The argument is converted to unsigned hexadecimal notation (without a leading 0x). u The argument is

  7. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    PubMed Central

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett’s acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI or the simulations used for BAR, or both, are not fully converged and the two sets of simulations may have sampled different phase space regions. The penalty scores of force field parameters of the 10 guest molecules provided by CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, which yielded the root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
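    For the thermodynamic integration leg mentioned above, the free energy is the integral over lambda of the ensemble-averaged dU/dlambda. The sketch below applies trapezoidal quadrature to hypothetical window averages; it is not the authors' soft-core protocol or data.

```python
# Hedged sketch: trapezoidal TI over lambda windows.
import numpy as np

lambdas = np.linspace(0.0, 1.0, 11)
# <dU/dlambda> per window in kcal/mol (hypothetical values)
dU_dlambda = np.array([-35.2, -28.4, -21.9, -16.3, -11.8,
                        -8.1,  -5.2,  -3.0,  -1.5,  -0.6, -0.1])

dG = np.trapz(dU_dlambda, lambdas)   # free energy change along the coupling path
print(f"TI estimate ~ {dG:.1f} kcal/mol")
```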

  8. Impact of mutations on the allosteric conformational equilibrium

    PubMed Central

    Weinkam, Patrick; Chen, Yao Chi; Pons, Jaume; Sali, Andrej

    2012-01-01

    Allostery in a protein involves effector binding at an allosteric site that changes the structure and/or dynamics at a distant, functional site. In addition to the chemical equilibrium of ligand binding, allostery involves a conformational equilibrium between one protein substate that binds the effector and a second substate that less strongly binds the effector. We run molecular dynamics simulations using simple, smooth energy landscapes to sample specific ligand-induced conformational transitions, as defined by the effector-bound and unbound protein structures. These simulations can be performed using our web server: http://salilab.org/allosmod/. We then develop a set of features to analyze the simulations and capture the relevant thermodynamic properties of the allosteric conformational equilibrium. These features are based on molecular mechanics energy functions, stereochemical effects, and structural/dynamic coupling between sites. Using a machine-learning algorithm on a dataset of 10 proteins and 179 mutations, we predict both the magnitude and sign of the allosteric conformational equilibrium shift by the mutation; the impact of a large identifiable fraction of the mutations can be predicted with an average unsigned error of 1 kBT. With similar accuracy, we predict the mutation effects for an 11th protein that was omitted from the initial training and testing of the machine-learning algorithm. We also assess which calculated thermodynamic properties contribute most to the accuracy of the prediction. PMID:23228330

  9. Multiconfiguration pair-density functional theory: barrier heights and main group and transition metal energetics.

    PubMed

    Carlson, Rebecca K; Li Manni, Giovanni; Sonnenberger, Andrew L; Truhlar, Donald G; Gagliardi, Laura

    2015-01-13

    Kohn-Sham density functional theory, resting on the representation of the electronic density and kinetic energy by a single Slater determinant, has revolutionized chemistry, but for open-shell systems, the Kohn-Sham Slater determinant has the wrong symmetry properties as compared to an accurate wave function. We have recently proposed a theory, called multiconfiguration pair-density functional theory (MC-PDFT), in which the electronic kinetic energy and classical Coulomb energy are calculated from a multiconfiguration wave function with the correct symmetry properties, and the rest of the energy is calculated from a density functional, called the on-top density functional, that depends on the density and the on-top pair density calculated from this wave function. We also proposed a simple way to approximate the on-top density functional by translation of Kohn-Sham exchange-correlation functionals. The method is much less expensive than other post-SCF methods for calculating the dynamical correlation energy starting with a multiconfiguration self-consistent-field wave function as the reference wave function, and initial tests of the theory were quite encouraging. Here, we provide a broader test of the theory by applying it to bond energies of main-group molecules and transition metal complexes, barrier heights and reaction energies for diverse chemical reactions, proton affinities, and the water dimerization energy. Averaged over 56 data points, the mean unsigned error is 3.2 kcal/mol for MC-PDFT, as compared to 6.9 kcal/mol for Kohn-Sham theory with a comparable density functional. MC-PDFT is more accurate on average than complete active space second-order perturbation theory (CASPT2) for main-group small-molecule bond energies, alkyl bond dissociation energies, transition-metal-ligand bond energies, proton affinities, and the water dimerization energy.

  10. The Relationship Between X-Ray Radiance and Magnetic Flux

    NASA Astrophysics Data System (ADS)

    Pevtsov, Alexei A.; Fisher, George H.; Acton, Loren W.; Longcope, Dana W.; Johns-Krull, Christopher M.; Kankelborg, Charles C.; Metcalf, Thomas R.

    2003-12-01

    We use soft X-ray and magnetic field observations of the Sun (quiet Sun, X-ray bright points, active regions, and integrated solar disk) and active stars (dwarf and pre-main-sequence) to study the relationship between total unsigned magnetic flux, Φ, and X-ray spectral radiance, LX. We find that Φ and LX exhibit a very nearly linear relationship over 12 orders of magnitude, albeit with significant levels of scatter. This suggests a universal relationship between magnetic flux and the power dissipated through coronal heating. If the relationship can be assumed linear, it is consistent with an average volumetric heating rate Q~B/L, where B is the average field strength along a closed field line and L is its length between footpoints. The Φ-LX relationship also indicates that X-rays provide a useful proxy for the magnetic flux on stars when magnetic measurements are unavailable.
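    The near-linear Φ-LX relationship is usually checked by fitting a straight line in log-log space. The sketch below fits synthetic points spanning many decades with numpy; the data and unit conventions are placeholders, not the observations described in this record.

```python
# Hedged sketch: slope of log10(L_X) vs. log10(Phi); slope ~ 1 means L_X is proportional to Phi.
import numpy as np

rng = np.random.default_rng(1)
log_phi = np.linspace(17, 29, 30)                             # log10 unsigned flux (arbitrary units)
log_lx = 1.0 * log_phi + 8.0 + 0.5 * rng.standard_normal(30)  # toy relation with scatter

slope, intercept = np.polyfit(log_phi, log_lx, 1)
print(f"best-fit slope = {slope:.2f}, intercept = {intercept:.2f}")
```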

  11. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method.

    PubMed

    Sinha, Debalina; Pavanello, Michele

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  12. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinha, Debalina; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  13. SecureQEMU: Emulation-Based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    DTIC Science & Technology

    2008-12-01

    SHA256_DIGEST_LENGTH)); peAddSection(&sFile, ".SigStub", dwStubSecSize, dwStubSecSize); peSecure(&sFile, deqAddrSize...deqAuthPageAddrSize.size()/2) * (8 + SHA256_DIGEST_LENGTH)) + 16; bCode[34] = ((char*)&dwSize)[0]; bCode[35] = ((char*)&dwSize)[1...2) * (8 + SHA256_DIGEST_LENGTH...)); AES_KEY aesKey; unsigned char ivsalt[16], temp_iv[16]; unsigned char *key

  14. Signal functions in delayed reinforcement

    PubMed Central

    Lattal, Kennon A.

    1984-01-01

    Three experiments were conducted with pigeons to examine the role of the signal in delay-of-reinforcement procedures. In the first, a blackout accompanying a period of nonreinforcement increased key-peck response rates maintained by immediate reinforcement. The effects of dissociating the blackout from the delay interval were examined in the second experiment. In three conditions, blackouts and unsignaled delays were negatively correlated or occurred randomly with respect to one another. A signaled delay and an unsignaled delay that omitted the blackouts were studied in two other conditions. All delay-of-reinforcement conditions generally produced response rates lower than those produced by immediate reinforcement. Signaled delays maintained higher response rates than did any of the various unsignaled-delay conditions, with or without dissociated blackouts. The effects of these latter conditions did not differ systematically from one another. The final experiment showed that response rates varied as a function of the frequency with which a blackout accompanied delay intervals. By eliminating a number of methodological difficulties present in previous delay-of-reinforcement experiments, these results suggest the importance of the signal in maintaining responding during delay-of-reinforcement procedures and, conversely, the importance of the delay interval in decreasing responding. PMID:16812387

  15. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    PubMed Central

    Gutiérrez, J. J.; Russell, James K.

    2016-01-01

    Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm in the foam and 1.7 mm in the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
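    The spectral displacement estimate referred to above can be sketched as a frequency-domain double integration, X(f) = -A(f)/(2*pi*f)^2, applied to each 2 s window, with the back-accelerometer displacement subtracted from the chest displacement. The sampling rate, waveform, and 0.5 Hz cutoff below are assumptions, not the authors' algorithm.

```python
# Hedged sketch: compression depth from two accelerometers via spectral double integration.
import numpy as np

def displacement_from_acceleration(acc, fs):
    n = acc.size
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.zeros_like(A)
    keep = f > 0.5                                # discard DC / drift-dominated bins
    X[keep] = -A[keep] / (2.0 * np.pi * f[keep]) ** 2
    return np.fft.irfft(X, n)

fs = 250.0                                        # Hz (assumed sampling rate)
t = np.arange(0.0, 2.0, 1.0 / fs)                 # one 2-s analysis window
chest_acc = -0.025 * (2 * np.pi * 2.0) ** 2 * np.sin(2 * np.pi * 2.0 * t)  # 25 mm amplitude, 120 cpm
back_acc = 0.3 * chest_acc                        # mattress moves with the manikin's back

depth = displacement_from_acceleration(chest_acc, fs) - displacement_from_acceleration(back_acc, fs)
print(f"estimated compression depth ~ {1000.0 * (depth.max() - depth.min()):.1f} mm")
```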

  16. Shared-hole graph search with adaptive constraints for 3D optic nerve head optical coherence tomography image segmentation

    PubMed Central

    Yu, Kai; Shi, Fei; Gao, Enting; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian

    2018-01-01

    Optic nerve head (ONH) is a crucial region for glaucoma detection and tracking based on spectral domain optical coherence tomography (SD-OCT) images. In this region, the existence of a “hole” structure makes retinal layer segmentation and analysis very challenging. To improve retinal layer segmentation, we propose a 3D method for ONH centered SD-OCT image segmentation, which is based on a modified graph search algorithm with a shared-hole and locally adaptive constraints. With the proposed method, both the optic disc boundary and nine retinal surfaces can be accurately segmented in SD-OCT images. An overall mean unsigned border positioning error of 7.27 ± 5.40 µm was achieved for layer segmentation, and a mean Dice coefficient of 0.925 ± 0.03 was achieved for optic disc region detection. PMID:29541497

  17. Using non-empirically tuned range-separated functionals with simulated emission bands to model fluorescence lifetimes.

    PubMed

    Wong, Z C; Fan, W Y; Chwee, T S; Sullivan, Michael B

    2017-08-09

    Fluorescence lifetimes were evaluated using TD-DFT under different approximations for the emitting molecule and various exchange-correlation functionals, such as B3LYP, BMK, CAM-B3LYP, LC-BLYP, M06, M06-2X, M11, PBE0, ωB97, ωB97X, LC-BLYP*, and ωB97X* where the range-separation parameters in the last two functionals were tuned in a non-empirical fashion. Changes in the optimised molecular geometries between the ground and electronically excited states were found to affect the quality of the calculated lifetimes significantly, while the inclusion of vibronic features led to further improvements over the assumption of a vertical electronic transition. The LC-BLYP* functional was found to return the most accurate fluorescence lifetimes with unsigned errors that are mostly within 1.5 ns of experimental values.

  18. Universal solvation model based on solute electron density and on a continuum model of the solvent defined by the bulk dielectric constant and atomic surface tensions.

    PubMed

    Marenich, Aleksandr V; Cramer, Christopher J; Truhlar, Donald G

    2009-05-07

    We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the "D" stands for "density" to denote that the full solute electron density is used without defining partial atomic charges. "Continuum" denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where "universal" denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous organic solvents and water), and 143 transfer free energies for 93 neutral solutes between water and 15 organic solvents. The elements present in the solutes are H, C, N, O, F, Si, P, S, Cl, and Br. The SMD model employs a single set of parameters (intrinsic atomic Coulomb radii and atomic surface tension coefficients) optimized over six electronic structure methods: M05-2X/MIDI!6D, M05-2X/6-31G*, M05-2X/6-31+G**, M05-2X/cc-pVTZ, B3LYP/6-31G*, and HF/6-31G*. Although the SMD model has been parametrized using the IEF-PCM protocol for bulk electrostatics, it may also be employed with other algorithms for solving the nonhomogeneous Poisson equation for continuum solvation calculations in which the solute is represented by its electron density in real space. This includes, for example, the conductor-like screening algorithm. With the 6-31G* basis set, the SMD model achieves mean unsigned errors of 0.6-1.0 kcal/mol in the solvation free energies of tested neutrals and mean unsigned errors of 4 kcal/mol on average for ions with either Gaussian03 or GAMESS.
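    The cavity-dispersion-solvent-structure term described in this record is, schematically, a sum of atomic surface tensions multiplied by solvent-accessible surface areas. The sketch below illustrates only that bookkeeping; the tensions and areas are invented placeholders, not SMD parameters.

```python
# Hedged sketch: G_CDS as a sum of (surface tension) x (solvent-accessible surface area) over atoms.

def g_cds(atoms):
    """atoms: iterable of (surface_tension, sasa) pairs, e.g. kcal/mol/A^2 and A^2."""
    return sum(sigma * sasa for sigma, sasa in atoms)

# Hypothetical solute with made-up per-atom tensions and areas.
solute = [(+0.010, 35.0),   # an exposed carbon, say
          (-0.015, 12.0),   # a polar oxygen, say
          (+0.005, 20.0)]   # a hydrogen, say
print(f"G_CDS ~ {g_cds(solute):.2f} kcal/mol")
```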

  19. Universal Solvation Model Based on Solute Electron Density and on a Continuum Model of the Solvent Defined by the Bulk Dielectric Constant and Atomic Surface Tensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marenich, Aleksandr; Cramer, Christopher J; Truhlar, Donald G

    2009-04-30

    We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the “D” stands for “density” to denote that the full solute electron density is used without defining partial atomic charges. “Continuum” denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where “universal” denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous organic solvents and water), and 143 transfer free energies for 93 neutral solutes between water and 15 organic solvents. The elements present in the solutes are H, C, N, O, F, Si, P, S, Cl, and Br. The SMD model employs a single set of parameters (intrinsic atomic Coulomb radii and atomic surface tension coefficients) optimized over six electronic structure methods: M05-2X/MIDI!6D, M05-2X/6-31G*, M05-2X/6-31+G**, M05-2X/cc-pVTZ, B3LYP/6-31G*, and HF/6-31G*. Although the SMD model has been parametrized using the IEF-PCM protocol for bulk electrostatics, it may also be employed with other algorithms for solving the nonhomogeneous Poisson equation for continuum solvation calculations in which the solute is represented by its electron density in real space. This includes, for example, the conductor-like screening algorithm. With the 6-31G* basis set, the SMD model achieves mean unsigned errors of 0.6-1.0 kcal/mol in the solvation free energies of tested neutrals and mean unsigned errors of 4 kcal/mol on average for ions with either Gaussian03 or GAMESS.

  20. Signals, resistance to change, and conditioned reinforcement in a multiple schedule.

    PubMed

    Bell, Matthew C; Gomez, Belen E; Kessler, Kira

    2008-06-01

    The effect of signals on resistance to change was evaluated using pigeons responding on a three-component multiple schedule. Each component contained a variable-interval initial link followed by a fixed-time terminal link. One component was an unsignaled-delay schedule, and two were equivalent signaled-delay schedules. After baseline training, resistance to change was assessed through (a) extinction and (b) adding free food to the intercomponent interval. During these tests, the signal stimulus from one of the signaled-delay components (SIG-T) was replaced with the initial-link stimulus from that component, converting it to an unsignaled-delay schedule. That signal stimulus was added to the delay period of the unsignaled-delay component (UNS), converting it to a signaled-delay schedule. The remaining signaled component remained unchanged (SIG-C). Resistance-to-change tests showed removing the signal had a minimal effect on resistance to change in the SIG-T component compared to the unchanged SIG-C component except for one block during free-food testing. Adding the signal to the UNS component significantly increased response rates suggesting that component had low response strength. Interestingly, the direction of the effect was in the opposite direction from what is typically observed. Results are consistent with the conclusion that the signal functioned as a conditioned reinforcer and inconsistent with a generalization-decrement explanation.

  1. Semiempirical Quantum Chemistry Model for the Lanthanides: RM1 (Recife Model 1) Parameters for Dysprosium, Holmium and Erbium

    PubMed Central

    Filho, Manoel A. M.; Dutra, José Diogo L.; Rocha, Gerd B.; Simas, Alfredo M.; Freire, Ricardo O.

    2014-01-01

    Complexes of dysprosium, holmium, and erbium find many applications as single-molecule magnets, as contrast agents for magnetic resonance imaging, as anti-cancer agents, in optical telecommunications, etc. Therefore, the development of tools that can prove helpful to complex design is presently an active area of research. In this article, we advance a major improvement to the semiempirical description of lanthanide complexes: the Recife Model 1 (RM1) model for the lanthanides, parameterized for the trications of Dy, Ho, and Er. By representing each lanthanide in the RM1 calculation as a three-electron atom with a set of 5d, 6s, and 6p semiempirical orbitals, the accuracy of the previous sparkle models, mainly concentrated on lanthanide-oxygen and lanthanide-nitrogen distances, is extended to other types of bonds in the trication complexes’ coordination polyhedra, such as lanthanide-carbon, lanthanide-chlorine, etc. This is all the more important as, for example, lanthanide-carbon distances in the coordination polyhedra comprise about 30% of all distances for the Dy, Ho, and Er complexes considered. Our results indicate that the average unsigned error for the lanthanide-carbon distances dropped from 0.30 Å for the sparkle models to 0.04 Å for the RM1 model for the lanthanides, over a total of 509 such distances for the set of all Dy, Ho, and Er complexes considered. Similar behavior is observed for the other distances as well, such as lanthanide-chlorine, lanthanide-bromine, lanthanide-phosphorus, and lanthanide-sulfur. Thus, the RM1 model for the lanthanides advanced in this article broadens the range of application of semiempirical models to lanthanide complexes by comprehensively including many types of bonds not adequately described by the previous models. PMID:24497945

  2. Water 16-mers and hexamers: assessment of the three-body and electrostatically embedded many-body approximations of the correlation energy or the nonlocal energy as ways to include cooperative effects.

    PubMed

    Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G

    2013-05-30

    This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation.
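
    As a point of reference for the pairwise-additive (PA) truncation discussed above, the sketch below evaluates the generic two-body many-body expansion E_PA = sum_i E_i + sum_{i<j} (E_ij - E_i - E_j) from monomer and dimer energies; in the EE-PA-CE scheme the same bookkeeping is applied to electrostatically embedded correlation energies. The numbers are toy values, not data from the paper.

      from itertools import combinations

      def pairwise_additive_energy(monomer_E, dimer_E):
          """Two-body (pairwise-additive) truncation of a many-body expansion.

          monomer_E -- dict: fragment index -> E_i
          dimer_E   -- dict: (i, j) with i < j -> E_ij
          Returns E_PA = sum_i E_i + sum_{i<j} (E_ij - E_i - E_j).
          """
          total = sum(monomer_E.values())
          for i, j in combinations(sorted(monomer_E), 2):
              total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
          return total

      # Toy fragment energies (hartree), not from the paper:
      mono = {0: -76.30, 1: -76.31, 2: -76.29}
      dim = {(0, 1): -152.62, (0, 2): -152.60, (1, 2): -152.61}
      print(pairwise_additive_energy(mono, dim))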

  3. Multipole moments in the effective fragment potential method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.

    In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA–generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.

  4. Multipole moments in the effective fragment potential method

    DOE PAGES

    Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.; ...

    2017-02-17

    In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA–generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.

  5. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps: the first is a denoising algorithm using wavelet diffusion based on a circular symmetric Laplacian model, and the second is graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm improved the contrast-to-noise ratio from 0.89 to 1.49 and increased the signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed fully automatic method to estimate the interpolated curve, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, surpassing previously introduced methods; it is also fast compared with the relatively slow speed of similar methods.

  6. Eb&D: A new clustering approach for signed social networks based on both edge-betweenness centrality and density of subgraphs

    NASA Astrophysics Data System (ADS)

    Qi, Xingqin; Song, Huimin; Wu, Jianliang; Fuller, Edgar; Luo, Rong; Zhang, Cun-Quan

    2017-09-01

    Clustering algorithms for unsigned social networks, which have only positive edges, have been studied intensively. However, when a network has like/dislike, love/hate, respect/disrespect, or trust/distrust relationships, unsigned social networks with only positive edges are inadequate. Thus we model such networks as signed networks, which can have both negative and positive edges. Detecting the cluster structure of signed networks is much harder than for unsigned networks, because it requires not only that as many positive edges as possible fall within clusters but also that as many negative edges as possible fall between clusters. Currently, few clustering algorithms for signed networks exist, and most of them require the number of final clusters as an input, which is hard to predict beforehand. In this paper, we propose a novel clustering algorithm called Eb&D for signed networks, in which both the betweenness of edges and the density of subgraphs are used to detect cluster structures. A hierarchically nested system is constructed to illustrate the inclusion relationships of clusters. To show the validity and efficiency of Eb&D, we test it on several classical social networks and hundreds of synthetic data sets, obtaining better results than other methods in all cases. The biggest advantage of Eb&D over other methods is that the number of clusters does not need to be known in advance.

  7. Speed of CMEs and the Magnetic Non-Potentiality of Their Source ARs

    NASA Technical Reports Server (NTRS)

    Tiwari, Sanjiv K.; Falconer, David A.; Moore, Ronald L.; Venkatakrishnan, P.

    2014-01-01

    Most fast coronal mass ejections (CMEs) originate from solar active regions (ARs). The non-potentiality of ARs is expected to determine the speed and size of CMEs in the outer corona, and several other unexplored parameters might be important as well. To determine the correlation between the initial speed of CMEs and the non-potentiality of source ARs, we associated over a hundred CMEs with source ARs via their co-produced flares. The speeds of the CMEs were collected from the SOHO LASCO CME catalog. We used vector magnetograms obtained mainly with HMI/SDO, and with Hinode (SOT/SP) when available within an hour of a CME occurrence, to evaluate various magnetic non-potentiality parameters, e.g., magnetic free-energy proxies, computed magnetic free energy, twist, shear angle, and signed shear angle. We also included several other parameters, e.g., total unsigned flux, net current, magnetic area of ARs, and area of sunspots, to investigate their correlation, if any, with the initial speeds of CMEs. Our preliminary results show that ARs with larger non-potentiality and area mostly produce fast CMEs but can also produce slower ones. ARs with lesser non-potentiality and area generally produce only slower CMEs, although there are a few exceptions. The total unsigned flux correlates with the non-potentiality parameters and area of ARs, but some ARs with large unsigned flux are also found to be among the least non-potential. A more detailed analysis is underway.

  8. Application of Free Energy Perturbation for the Design of BACE1 Inhibitors.

    PubMed

    Ciordia, Myriam; Pérez-Benito, Laura; Delgado, Francisca; Trabanco, Andrés A; Tresadern, Gary

    2016-09-26

    Novel spiroaminodihydropyrroles probing for optimized interactions at the P3 pocket of β-secretase 1 (BACE1) were designed with the use of free energy perturbation (FEP) calculations. The resulting molecules showed pIC50 potencies in enzymatic BACE1 inhibition assays ranging from approximately 5 to 7. Good correlation was observed between the predicted activity from the FEP calculations and experimental activity. Simulations run with a default 5 ns approach delivered a mean unsigned error (MUE) between prediction and experiment of 0.58 and 0.91 kcal/mol for retrospective and prospective applications, respectively. With longer simulations of 10 and 20 ns, the MUE was in both cases 0.57 kcal/mol for the retrospective application, and 0.69 and 0.59 kcal/mol for the prospective application. Other considerations that impact the quality of the calculations are discussed. This work provides an example of the value of FEP as a computational tool for drug discovery.
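
    The mean unsigned error quoted above is simply the average absolute difference between predicted and experimental free energies. A minimal sketch with made-up relative binding free energies (kcal/mol), not the paper's data:

      def mean_unsigned_error(predicted, experimental):
          """MUE = (1/N) * sum_i |pred_i - exp_i|, in the units of the inputs."""
          assert len(predicted) == len(experimental)
          return sum(abs(p - e) for p, e in zip(predicted, experimental)) / len(predicted)

      # Hypothetical relative binding free energies (kcal/mol):
      pred = [-0.8, 1.2, 0.3, -1.5]
      expt = [-0.2, 0.9, 1.0, -1.1]
      print(round(mean_unsigned_error(pred, expt), 2))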

  9. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    PubMed

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-02

    The pKa of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). G3, SCS-MP2 and M11-L methods coupled with SMD and SM8 solvation models perform well for alkanolamines with mean unsigned errors below 0.20 pKa units, in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
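
    For context, once a deprotonation free energy in solution has been computed with a method/solvation-model combination such as those above, the corresponding pKa follows from pKa = dG_deprot/(RT ln 10). The sketch below shows only that conversion with a hypothetical dG value; it ignores the proton solvation and standard-state conventions that a production calculation must handle.

      import math

      R = 1.987204e-3   # gas constant, kcal/(mol*K)
      T = 298.15        # temperature, K

      def pka_from_deltaG(dG_deprot_kcal):
          """pKa = dG_deprot / (RT ln 10) for BH+ -> B + H+ in solution."""
          return dG_deprot_kcal / (R * T * math.log(10))

      # Hypothetical aqueous deprotonation free energy of a protonated amine:
      print(round(pka_from_deltaG(13.0), 2))   # about 9.5 pKa units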

  10. Simulation of Near-Edge X-ray Absorption Fine Structure with Time-Dependent Equation-of-Motion Coupled-Cluster Theory.

    PubMed

    Nascimento, Daniel R; DePrince, A Eugene

    2017-07-06

    An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.

  11. Left-Turn Bays

    DOT National Transportation Integrated Search

    1996-05-01

    The topic of left-turn bays (left-turn lanes) involves the following three issues: 1.) Warrants; 2.) Bay Length and 3.) Design Details. This discussion paper deals with warrants and bay length -- including queue storage at signalized and unsignalized...

  12. Intersection collision warning system

    DOT National Transportation Integrated Search

    1999-04-01

    Safety at unsignalized intersections is a major concern. Intersection collisions are one of the most common types of crash, and in the United States, they account for nearly 2 million accidents and 6,700 fatalities every year. However, a fully signal...

  13. Driver eye-scanning behavior at intersections at night.

    DOT National Transportation Integrated Search

    2009-10-01

    This research project analyzed drivers' eye-scanning behavior at night when approaching signalized and unsignalized intersections, using data from a head-mounted eye-tracking system during open-road driving on a prescribed route. During the ...

  14. Left-turn lanes at unsignalized median openings.

    DOT National Transportation Integrated Search

    2014-03-01

    Due to the frequent presence of median openings in urban arterial settings, the requirements for the deceleration and storage of turning vehicles (e.g. AASHTO Green Book) often exceed the available length between two adjacent openings which leave...

  15. The effects of mobile phone use on pedestrian crossing behaviour at signalized and unsignalized intersections.

    PubMed

    Hatfield, Julie; Murphy, Susanne

    2007-01-01

    Research amongst drivers suggests that pedestrians using mobile telephones may behave riskily while crossing the road, and casual observation suggests concerning levels of pedestrian mobile-use. An observational field survey of 270 females and 276 males was conducted to compare the safety of crossing behaviours for pedestrians using, versus not using, a mobile phone. Amongst females, pedestrians who crossed while talking on a mobile phone crossed more slowly, and were less likely to look at traffic before starting to cross, to wait for traffic to stop, or to look at traffic while crossing, compared to matched controls. For males, pedestrians who crossed while talking on a mobile phone crossed more slowly at unsignalized crossings. These effects suggest that talking on a mobile phone is associated with cognitive distraction that may undermine pedestrian safety. Messages explicitly suggesting techniques for avoiding mobile-use while road crossing may benefit pedestrian safety.

  16. Gradient-based multiconfiguration Shepard interpolation for generating potential energy surfaces for polyatomic reactions.

    PubMed

    Tishchenko, Oksana; Truhlar, Donald G

    2010-02-28

    This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 x 2 electronically diabatic Hamiltonian matrix with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at the first order, and, therefore, no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH+H(2)-->H(2)O+H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
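
    The central construction in MCSI/MCMM is the lowest eigenvalue of the 2 x 2 diabatic Hamiltonian, V = (U11 + U22)/2 - sqrt((U11 - U22)^2 + 4*V12^2)/2, where U11 and U22 are the reactant and product surfaces and V12 is the Shepard-interpolated coupling. The sketch below shows just that eigenvalue step with placeholder energies; it is not an implementation of the interpolation itself.

      import math

      def adiabatic_ground_state(u11, u22, v12):
          """Lowest eigenvalue of the 2x2 diabatic Hamiltonian [[u11, v12], [v12, u22]]."""
          return 0.5 * (u11 + u22) - 0.5 * math.sqrt((u11 - u22) ** 2 + 4.0 * v12 ** 2)

      # Placeholder diabatic energies and coupling (kcal/mol):
      print(adiabatic_ground_state(12.0, 15.0, 6.0))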

  17. Higher-order aberrations and best-corrected visual acuity in Native American children with a high prevalence of astigmatism

    PubMed Central

    Miller, Joseph M.; Harvey, Erin M.; Schwiegerling, Jim

    2016-01-01

    Purpose To determine whether higher-order aberrations (HOAs) in children from a highly astigmatic population differ from population norms and whether HOAs are associated with astigmatism and reduced best-corrected visual acuity. Methods Subjects were 218 Tohono O’odham Native American children 5–9 years of age. Noncycloplegic HOA measurements were obtained with a handheld Shack-Hartmann sensor (SHS). Signed (z06s to z14s) and unsigned (z06u to z14u) wavefront aberration Zernike coefficients Z(3,−3) to Z(4,4) were rescaled for a 4 mm diameter pupil and compared to adult population norms. Cycloplegic refraction and best-corrected logMAR letter visual acuity (BCVA) were also measured. Regression analyses assessed the contribution of astigmatism (J0) and HOAs to BCVA. Results The mean root-mean-square (RMS) HOA of 0.191 ± 0.072 μm was significantly greater than population norms (0.100 ± 0.044 μm). All unsigned HOA coefficients (z06u to z14u) and all signed coefficients except z09s, z10s, and z11s were significantly larger than population norms. Decreased BCVA was associated with astigmatism (J0) and spherical aberration (z12u) but not RMS coma, with the effect of J0 about 4 times as great as z12u. Conclusions Tohono O’odham children show elevated HOAs compared to population norms. Astigmatism and unsigned spherical aberration are associated with decreased acuity, but the effects of spherical aberration are minimal and not clinically significant. PMID:26239206
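
    The RMS HOA reported above is, for normalized Zernike coefficients, the square root of the sum of squares of the third-order-and-higher terms. A minimal sketch with hypothetical coefficients (micrometers, 4 mm pupil):

      import math

      def rms_wavefront_error(coefficients):
          """RMS of a set of normalized Zernike coefficients, in the units of the inputs.
          Pass only the 3rd-order-and-above terms to obtain the higher-order RMS."""
          return math.sqrt(sum(c * c for c in coefficients))

      # Hypothetical 3rd/4th-order coefficients (micrometers):
      hoa_coeffs = [0.05, -0.08, 0.12, 0.03, -0.06, 0.02, 0.04, -0.01, 0.05]
      print(round(rms_wavefront_error(hoa_coeffs), 3))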

  18. The role of 'jackpot' stimuli in maladaptive decision-making: dissociable effects of D1/D2 receptor agonists and antagonists.

    PubMed

    Smith, Aaron P; Hofford, Rebecca S; Zentall, Thomas R; Beckmann, Joshua S

    2018-05-01

    Laboratory experiments often model risk through a choice between a large, uncertain (LU) reward and a small, certain (SC) reward as an index of an individual's risk tolerance. An important factor generally lacking from these procedures is reward-associated cues that may modulate risk preferences. We tested whether the addition of cues signaling 'jackpot' wins to LU choices would modulate risk preferences and if these cue effects were mediated by dopaminergic signaling. Three groups of rats chose between LU and SC rewards for which the LU probability of reward decreased across blocks. The unsignaled group received a non-informative stimulus of trial outcome. The signaled group received a jackpot signal prior to reward delivery and blackout on losses. The signaled-light group received a similar jackpot for wins, but a salient loss signal distinct from the win signal. Presenting win signals decreased the discounting of LU value for both signaled groups regardless of loss signal, while the unsignaled group showed discounting similar to previous research without cues. Pharmacological challenges with D1/D2 agonists and antagonists revealed that D1 antagonism increased and decreased sensitivities to the relative probability of reward for unsignaled and signaled groups, respectively, while D2 agonists decreased sensitivities to the relative magnitude of reward. The results highlight how signals predictive of wins can promote maladaptive risk taking in individuals, while loss signals have reduced effect. Additionally, the presence of reward-predictive cues may change the underlying neurobehavioral mechanisms mediating decision-making under risk.

  19. An Exploration of the Emission Properties of X-Ray Bright Points Seen with SDO

    NASA Technical Reports Server (NTRS)

    Saar, S. H.; Elsden, T.; Muglach, K.

    2012-01-01

    We present preliminary results of a study of X-ray Bright Point (XBP) EUV emission and its dependence on other properties. The XBPs were located using a new, automated XBP finder for AIA developed as part of the Feature Finding Team for SDO Computer Vision. We analyze XBPs near disk center, comparing AIA EUV fluxes, HMI LOS magnetic fields, and photospheric flow fields (derived from HMI data) to look for relationships between XBP emission, magnetic flux, velocity fields, and XBP local environment. We find some evidence for differences in the mean XBP temperature with environment. Unsigned magnetic flux is correlated with XBP emission, though other parameters play a role. The majority of XBP footpoints are approaching each other, though at a slight angle from head-on on average. We discuss the results in the context of XBP heating.

  20. Traffic operational evaluation of traffic impact analysis (TIA) case sites.

    DOT National Transportation Integrated Search

    2010-09-22

    This report summarizes traffic operational evaluation of six select traffic impact analysis (TIA) case sites and the effectiveness of forecasting methods used in TIA studies. Six TIA case sites comprising 15 signalized intersections and 2 unsignalize...

  1. Empirically-based performance assessment & simulation of pedestrian behavior at unsignalized crossings.

    DOT National Transportation Integrated Search

    2014-09-01

    The objective of this research was to provide an improved understanding of pedestrian-vehicle interaction at mid-block pedestrian crossings and develop methods that can be used in traffic operational analysis and microsimulation packages. Models ...

  2. DNS Rebinding Attacks

    DTIC Science & Technology

    2009-09-01

    active scripting, file downloads, installation of desktop items, signed and unsigned ActiveX controls, Java permissions, launching applications and...files in an IFRAME, running ActiveX controls and plug-ins, and scripting of Java applets [49]. This security measure is very effective against DNS

  3. Roadway lighting and safety : phase II--monitoring quality, durability and efficiency.

    DOT National Transportation Integrated Search

    2011-10-01

    This Phase II project follows a previous project titled Strategies to Address Nighttime Crashes at Rural, Unsignalized Intersections. Based on the results of the previous study, the Iowa Highway Research Board (IHRB) indicated interest in pursuing fu...

  4. Public roads : a journal of highway research. Vol. 25, No. 10

    DOT National Transportation Integrated Search

    1949-10-01

    In this issue of Public Roads appears the first portion of an important work on highway capacity and its practical applications. The second half of the report, dealing with intersections at grade, weaving sections and unsignalized cross movements, ra...

  5. Safety, operational, and energy impacts of in-vehicle adaptive stop displays using connected vehicle technology.

    DOT National Transportation Integrated Search

    2015-07-01

    Un-signalized intersections create multiple opportunities for missed or misunderstood information. Stop sign-controlled intersections have also been shown to be a source of delay and emissions due to their frequent, often inappropriate use. By us...

  6. Signed weighted gene co-expression network analysis of transcriptional regulation in murine embryonic stem cells

    PubMed Central

    Mason, Mike J; Fan, Guoping; Plath, Kathrin; Zhou, Qing; Horvath, Steve

    2009-01-01

    Background Recent work has revealed that a core group of transcription factors (TFs) regulates the key characteristics of embryonic stem (ES) cells: pluripotency and self-renewal. Current efforts focus on identifying genes that play important roles in maintaining pluripotency and self-renewal in ES cells and aim to understand the interactions among these genes. To that end, we investigated the use of unsigned and signed network analysis to identify pluripotency and differentiation related genes. Results We show that signed networks provide a better systems level understanding of the regulatory mechanisms of ES cells than unsigned networks, using two independent murine ES cell expression data sets. Specifically, using signed weighted gene co-expression network analysis (WGCNA), we found a pluripotency module and a differentiation module, which are not identified in unsigned networks. We confirmed the importance of these modules by incorporating genome-wide TF binding data for key ES cell regulators. Interestingly, we find that the pluripotency module is enriched with genes related to DNA damage repair and mitochondrial function in addition to transcriptional regulation. Using a connectivity measure of module membership, we not only identify known regulators of ES cells but also show that Mrpl15, Msh6, Nrf1, Nup133, Ppif, Rbpj, Sh3gl2, and Zfp39, among other genes, have important roles in maintaining ES cell pluripotency and self-renewal. We also report highly significant relationships between module membership and epigenetic modifications (histone modifications and promoter CpG methylation status), which are known to play a role in controlling gene expression during ES cell self-renewal and differentiation. Conclusion Our systems biologic re-analysis of gene expression, transcription factor binding, epigenetic and gene ontology data provides a novel integrative view of ES cell biology. PMID:19619308
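
    The signed/unsigned distinction above comes down to how a correlation is mapped to a network adjacency. In the usual WGCNA convention, the unsigned adjacency is |cor|^beta while the signed adjacency is ((1 + cor)/2)^beta, so strongly negatively correlated genes receive near-zero weight instead of a strong edge. A minimal sketch (the beta values are common defaults, not ones taken from this study):

      def unsigned_adjacency(cor, beta=6):
          """Unsigned WGCNA adjacency: |cor|**beta."""
          return abs(cor) ** beta

      def signed_adjacency(cor, beta=12):
          """Signed WGCNA adjacency: ((1 + cor)/2)**beta.
          Negative correlations map to near-zero weights rather than strong edges."""
          return ((1.0 + cor) / 2.0) ** beta

      for r in (-0.9, 0.0, 0.9):
          print(r, round(unsigned_adjacency(r), 4), round(signed_adjacency(r), 4))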

  7. Evaluating Parametrization Protocols for Hydration Free Energy Calculations with the AMOEBA Polarizable Force Field.

    PubMed

    Bradshaw, Richard T; Essex, Jonathan W

    2016-08-09

    Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.

  8. Atomic Charge Parameters for the Finite Difference Poisson-Boltzmann Method Using Electronegativity Neutralization.

    PubMed

    Yang, Qingyi; Sharp, Kim A

    2006-07-01

    An optimization of Rappe and Goddard's charge equilibration (QEq) method of assigning atomic partial charges is described. This optimization is designed for fast and accurate calculation of solvation free energies using the finite difference Poisson-Boltzmann (FDPB) method. The optimization is performed against experimental small molecule solvation free energies using the FDPB method and adjusting Rappe and Goddard's atomic electronegativity values. Using a test set of compounds for which experimental solvation energies are available and a rather small number of parameters, very good agreement was obtained with experiment, with a mean unsigned error of about 0.5 kcal/mol. The QEq atomic partial charge assignment method can reflect the effects of the conformational changes and solvent induction on charge distribution in molecules. In the second section of the paper we examined this feature with a study of the alanine dipeptide conformations in water solvent. The different contributions to the energy surface of the dipeptide were examined and compared with the results from fixed CHARMm charge potential, which is widely used for molecular dynamics studies.
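
    Charge equilibration assigns partial charges by equalizing atomic chemical potentials subject to a total-charge constraint, which reduces to a small linear system in the charges. The sketch below solves that generic system; the electronegativity and hardness/Coulomb values are placeholders, not the Rappe-Goddard or re-optimized parameters discussed above.

      import numpy as np

      def qeq_charges(chi, J, total_charge=0.0):
          """Solve the charge-equilibration conditions
               chi_i + sum_j J_ij * q_j = mu   (equal chemical potentials)
               sum_i q_i = total_charge
          for the partial charges q and the common potential mu.

          chi -- length-N sequence of atomic electronegativities
          J   -- N x N matrix of hardness (diagonal) and Coulomb (off-diagonal) terms
          """
          n = len(chi)
          A = np.zeros((n + 1, n + 1))
          b = np.zeros(n + 1)
          A[:n, :n] = J
          A[:n, n] = -1.0          # move mu to the left-hand side
          A[n, :n] = 1.0           # charge-conservation row
          b[:n] = -np.asarray(chi, dtype=float)
          b[n] = total_charge
          solution = np.linalg.solve(A, b)
          return solution[:n], solution[n]   # charges, chemical potential

      # Placeholder two-atom example (arbitrary eV-based values):
      q, mu = qeq_charges(chi=[4.5, 7.0], J=[[10.0, 4.0], [4.0, 12.0]])
      print(q, mu)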

  9. Differential Item Functioning Detection Across Two Methods of Defining Group Comparisons

    PubMed Central

    Sari, Halil Ibrahim

    2014-01-01

    This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF studies. In this study, a simulation was conducted based on data from a 60-item ACT Mathematics test (ACT; Hanson & Béguin). The unsigned area measure method (Raju) was used as the DIF detection method. An application to operational data was also completed in the study, as well as a comparison of observed Type I error rates and false discovery rates across the two methods of defining groups. Results indicate that the amount of flagged DIF or interpretations about DIF in all conditions were not the same across the two methods, and there may be some benefits to using composite group approaches. The results are discussed in connection to differing definitions of fairness. Recommendations for practice are made. PMID:29795837
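
    For reference, the unsigned area measure quantifies DIF as the area between the reference- and focal-group item response functions; when a closed form is not convenient it can be approximated numerically. The sketch below integrates |P_R(theta) - P_F(theta)| on a grid for a 2PL item with hypothetical parameters (with equal discriminations the unsigned area reduces to |b_F - b_R|).

      import math

      def icc_2pl(theta, a, b):
          """Two-parameter logistic item characteristic curve (D = 1.7)."""
          return 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))

      def unsigned_area(a_ref, b_ref, a_foc, b_foc, lo=-6.0, hi=6.0, n=2001):
          """Trapezoidal estimate of the unsigned area between two ICCs."""
          step = (hi - lo) / (n - 1)
          total = 0.0
          for k in range(n):
              theta = lo + k * step
              weight = 0.5 if k in (0, n - 1) else 1.0
              total += weight * abs(icc_2pl(theta, a_ref, b_ref) - icc_2pl(theta, a_foc, b_foc))
          return total * step

      # Hypothetical item: equal discriminations, focal difficulty shifted by 0.5
      print(round(unsigned_area(1.2, 0.0, 1.2, 0.5), 3))   # approximately 0.5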

  10. Resistance to Change and Relapse of Observing

    ERIC Educational Resources Information Center

    Thrailkill, Eric A.; Shahan, Timothy A.

    2012-01-01

    Four experiments examined relapse of extinguished observing behavior of pigeons using a two-component multiple schedule of observing-response procedures. In both components, unsignaled periods of variable-interval (VI) food reinforcement alternated with extinction and observing responses produced stimuli associated with the availability of the VI…

  11. Pedestrian crosswalk case studies : Sacramento, California; Richmond, Virginia; Buffalo, New York; Stillwater, Minnesota

    DOT National Transportation Integrated Search

    2001-08-01

    The objective of this research was to determine the effect of crosswalk markings on driver and pedestrian behavior at unsignalized intersections. A before/after evaluation of crosswalk markings was conducted at 11 locations in 4 U.S. cities. Behavior...

  12. Development of left-turn lane guidelines for signalized and unsignalized intersections.

    DOT National Transportation Integrated Search

    2004-01-01

    It is generally accepted that the level of service (LOS) at intersections significantly affects the overall LOS of the road system. It is also known that the LOS at an intersection can be adversely affected by frequently allowing left-turning vehicle...

  13. L&D Manual Turn Lane Storage Validation/Update

    DOT National Transportation Integrated Search

    2012-08-01

    Queuing occurs at intersections mostly due to overflow or inadequacy of turn bays. The ODOT L&D Manual Volume 1 has storage requirements for both signalized and unsignalized intersections. Figures 401-9E and 401-10E of the L&D Manual provide the ...

  14. Signed-negabinary-arithmetic-based optical computing by use of a single liquid-crystal-display panel.

    PubMed

    Datta, Asit K; Munshi, Soumika

    2002-03-10

    Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented, by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks have simplified the architecture as well as reduced the cost and complexity.
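
    For readers unfamiliar with base -2, the sketch below shows the standard conversion between decimal integers and (unsigned) negabinary digit strings by repeated division by -2 with a non-negative remainder; it illustrates the number representation only, not the optical encoding architecture itself.

      def to_negabinary(n):
          """Return the base -2 (negabinary) digit string of an integer."""
          if n == 0:
              return "0"
          digits = []
          while n != 0:
              n, r = divmod(n, -2)
              if r < 0:          # force the remainder to be 0 or 1
                  n += 1
                  r += 2
              digits.append(str(r))
          return "".join(reversed(digits))

      def from_negabinary(s):
          """Evaluate a negabinary digit string back to an integer."""
          return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

      for value in (6, -6, 13):
          nb = to_negabinary(value)
          print(value, nb, from_negabinary(nb))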

  15. Generalization of Clustering Coefficients to Signed Correlation Networks

    PubMed Central

    Costantini, Giulio; Perugini, Marco

    2014-01-01

    The recent interest in network analysis applications in personality psychology and psychopathology has put forward new methodological challenges. Personality and psychopathology networks are typically based on correlation matrices and therefore include both positive and negative edge signs. However, some applications of network analysis disregard negative edges, such as computing clustering coefficients. In this contribution, we illustrate the importance of the distinction between positive and negative edges in networks based on correlation matrices. The clustering coefficient is generalized to signed correlation networks: three new indices are introduced that take edge signs into account, each derived from an existing and widely used formula. The performances of the new indices are illustrated and compared with the performances of the unsigned indices, both on a signed simulated network and on a signed network based on actual personality psychology data. The results show that the new indices are more resistant to sample variations in correlation networks and therefore have higher convergence compared with the unsigned indices both in simulated networks and with real data. PMID:24586367

  16. Estimation of Critical Gap Based on Raff's Definition

    PubMed Central

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that vehicle arrivals in the major stream and vehicle arrivals in the minor stream are independent. The headways of the major stream follow the M3 distribution. Based on Raff's definition of the critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model. Both models use the total rejected coefficient. The different calculation models are compared by simulation, and the new models are found to be valid. The conclusion reveals that the M3 definition model is simple and valid. The revised Raff's model strictly obeys Raff's definition of the critical gap, and its field of application is wider than that of the original Raff's model; it gives a more accurate result than the original Raff's model. The M3 definition model and the revised Raff's model yield consistent results. PMID:25574160

  17. Estimation of critical gap based on Raff's definition.

    PubMed

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that vehicle arrivals in the major stream and vehicle arrivals in the minor stream are independent. The headways of the major stream follow the M3 distribution. Based on Raff's definition of the critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model. Both models use the total rejected coefficient. The different calculation models are compared by simulation, and the new models are found to be valid. The conclusion reveals that the M3 definition model is simple and valid. The revised Raff's model strictly obeys Raff's definition of the critical gap, and its field of application is wider than that of the original Raff's model; it gives a more accurate result than the original Raff's model. The M3 definition model and the revised Raff's model yield consistent results.
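
    Under Raff's definition, the critical gap is the gap length at which the number of accepted gaps shorter than it equals the number of rejected gaps longer than it, i.e., the crossing point of the two cumulative curves. The sketch below locates that crossing by a simple scan over candidate values; the gap data are made up and the scan is only one of several ways to find the intersection.

      def raff_critical_gap(accepted, rejected, step=0.1):
          """Approximate Raff's critical gap: the gap t at which the number of accepted
          gaps shorter than t equals the number of rejected gaps longer than t."""
          lo = min(min(accepted), min(rejected))
          hi = max(max(accepted), max(rejected))
          best_t, best_diff = lo, float("inf")
          steps = int(round((hi - lo) / step)) + 1
          for k in range(steps):
              t = lo + k * step
              accepted_shorter = sum(1 for g in accepted if g < t)
              rejected_longer = sum(1 for g in rejected if g > t)
              diff = abs(accepted_shorter - rejected_longer)
              if diff < best_diff:
                  best_t, best_diff = t, diff
          return best_t

      # Made-up gap data (seconds):
      accepted_gaps = [4.2, 5.1, 5.6, 6.3, 6.8, 7.0, 8.4]
      rejected_gaps = [2.1, 2.7, 3.0, 3.3, 3.8, 4.5, 5.0]
      print(round(raff_critical_gap(accepted_gaps, rejected_gaps), 1))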

  18. Conditioned Reinforcement Value and Resistance to Change

    ERIC Educational Resources Information Center

    Shahan, Timothy A.; Podlesnik, Christopher A.

    2008-01-01

    Three experiments examined the effects of conditioned reinforcement value and primary reinforcement rate on resistance to change using a multiple schedule of observing-response procedures with pigeons. In the absence of observing responses in both components, unsignaled periods of variable-interval (VI) schedule food reinforcement alternated with…

  19. The Fail-Proof Student

    ERIC Educational Resources Information Center

    Fiamengo, Janice

    2013-01-01

    In this article, the author comments on an unsigned newspaper piece titled "Helping Talent Rise to the Top," printed in Canada's "Globe and Mail" about a new measure to enhance student well-being at Queen's University in Kingston, Ontario. The "Globe" piece lauds Queen's, a top-ranked Canadian undergraduate school,…

  20. Localized orbital corrections applied to thermochemical errors in density functional theory: The role of basis set and application to molecular reactions

    NASA Astrophysics Data System (ADS)

    Goldfeld, Dahlia A.; Bochevarov, Arteum D.; Friesner, Richard A.

    2008-12-01

    This paper is a logical continuation of the 22-parameter localized orbital correction (LOC) methodology that we developed in previous papers [R. A. Friesner et al., J. Chem. Phys. 125, 124107 (2006); E. H. Knoll and R. A. Friesner, J. Phys. Chem. B 110, 18787 (2006)]. This methodology allows one to redress systematic density functional theory (DFT) errors, rooted in DFT's inherent inability to accurately describe nondynamical correlation. Variants of the LOC scheme, in conjunction with B3LYP (denoted as B3LYP-LOC), were previously applied to enthalpies of formation, ionization potentials, and electron affinities and showed an impressive reduction in the errors. In this paper, we demonstrate for the first time that the B3LYP-LOC scheme is robust across different basis sets [6-31G∗, 6-311++G(3df,3pd), cc-pVTZ, and aug-cc-pVTZ] and reaction types (atomization reactions and molecular reactions). For example, for a test set of 70 molecular reactions, the LOC scheme reduces their mean unsigned error from 4.7 kcal/mol [obtained with B3LYP/6-311++G(3df,3pd)] to 0.8 kcal/mol. We also investigated whether the LOC methodology would be equally successful if applied to the promising M05-2X functional. We conclude that although M05-2X produces better reaction enthalpies than B3LYP, the LOC scheme does not combine nearly as successfully with M05-2X as with B3LYP. A brief analysis of another functional, M06-2X, reveals that it is more accurate than M05-2X but its combination with LOC still cannot compete in accuracy with B3LYP-LOC. Indeed, B3LYP-LOC remains the best method of computing reaction enthalpies.

  1. Evaluating the Effect of Advance Yield Markings and Symbolic Signs on Vehicle-Pedestrian Conflicts at Marked Midblock Crosswalks across Multilane Roads

    DOT National Transportation Integrated Search

    2016-02-01

    The Commonwealth of Massachusetts has made walkable communities a priority. Pedestrian safety is key to the success of this objective. Pedestrians are at high risk when traversing unsignalized, marked crosswalks located either midblock or at T inters...

  2. Evaluating the effect of advance yield markings and symbolic signs on vehicle-pedestrian conflicts at marked midblock crosswalks across multilane roads.

    DOT National Transportation Integrated Search

    2016-02-01

    The Commonwealth of Massachusetts has made walkable communities a priority. Pedestrian safety is key to the success of this objective. Pedestrians are at high risk when traversing unsignalized, marked crosswalks located either midblock or at T inter...

  3. On Double-Entry Bookkeeping: The Mathematical Treatment

    ERIC Educational Resources Information Center

    Ellerman, David

    2014-01-01

    Double-entry bookkeeping (DEB) implicitly uses a specific mathematical construction, the group of differences using pairs of unsigned numbers ("T-accounts"). That construction was only formulated abstractly in mathematics in the nineteenth century, even though DEB had been used in the business world for over five centuries. Yet the…

  4. Signed vs. Unsigned Report of Depression and Self-Esteem.

    ERIC Educational Resources Information Center

    Nolan, R. F; And Others

    1994-01-01

    One hundred thirty-five adolescents were administered the Children's Depression Inventory (CDI) and the Coopersmith Self-Esteem Inventory (CSEI). On the CDI, male adolescents responded more severely on an item involving fighting with others when they could be identified. There were no significant differences among responses on CSEI items.…

  5. Do Conditional Reinforcers Count?

    ERIC Educational Resources Information Center

    Davison, Michael; Baum, William M.

    2006-01-01

    Six pigeons were trained on a procedure in which seven components arranged different food-delivery ratios on concurrent variable-interval schedules each session. The components were unsignaled, lasted for 10 food deliveries, and occurred in random order with a 60-s blackout between components. The schedules were arranged using a switching-key…

  6. Electronic couplings for molecular charge transfer: Benchmarking CDFT, FODFT, and FODFTB against high-level ab initio calculations

    NASA Astrophysics Data System (ADS)

    Kubas, Adam; Hoffmann, Felix; Heck, Alexander; Oberhofer, Harald; Elstner, Marcus; Blumberger, Jochen

    2014-03-01

    We introduce a database (HAB11) of electronic coupling matrix elements (Hab) for electron transfer in 11 π-conjugated organic homo-dimer cations. High-level ab initio calculations at the multireference configuration interaction MRCI+Q level of theory, n-electron valence state perturbation theory NEVPT2, and (spin-component scaled) approximate coupled cluster model (SCS)-CC2 are reported for this database to assess the performance of three DFT methods of decreasing computational cost, including constrained density functional theory (CDFT), fragment-orbital DFT (FODFT), and self-consistent charge density functional tight-binding (FODFTB). We find that the CDFT approach in combination with a modified PBE functional containing 50% Hartree-Fock exchange gives best results for absolute Hab values (mean relative unsigned error = 5.3%) and exponential distance decay constants β (4.3%). CDFT in combination with pure PBE overestimates couplings by 38.7% due to a too diffuse excess charge distribution, whereas the economic FODFT and highly cost-effective FODFTB methods underestimate couplings by 37.6% and 42.4%, respectively, due to neglect of interaction between donor and acceptor. The errors are systematic, however, and can be significantly reduced by applying a uniform scaling factor for each method. Applications to dimers outside the database, specifically rotated thiophene dimers and larger acenes up to pentacene, suggests that the same scaling procedure significantly improves the FODFT and FODFTB results for larger π-conjugated systems relevant to organic semiconductors and DNA.
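
    The distance decay constants β mentioned above follow from |Hab| proportional to exp(-beta * d / 2), so beta can be extracted as minus twice the slope of ln|Hab| against d. A minimal fitting sketch with made-up coupling values (the numbers are not from the HAB11 database):

      import numpy as np

      def decay_constant(distances, couplings):
          """Fit |Hab| = A * exp(-beta * d / 2) by linear regression of ln|Hab| vs d.
          Returns (beta, A)."""
          d = np.asarray(distances, dtype=float)
          ln_h = np.log(np.abs(np.asarray(couplings, dtype=float)))
          slope, intercept = np.polyfit(d, ln_h, 1)
          return -2.0 * slope, float(np.exp(intercept))

      # Made-up couplings (eV) at increasing separations (Angstrom):
      d = [3.5, 4.0, 4.5, 5.0]
      h_ab = [0.40, 0.23, 0.13, 0.075]
      beta, prefactor = decay_constant(d, h_ab)
      print(round(beta, 2), round(prefactor, 2))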

  7. Do Solvated Electrons (e(aq)⁻) Reduce DNA Bases? A Gaussian 4 and Density Functional Theory-Molecular Dynamics Study.

    PubMed

    Kumar, Anil; Adhikary, Amitava; Shamoun, Lance; Sevilla, Michael D

    2016-03-10

    The solvated electron (e(aq)⁻) is a primary intermediate after an ionization event that produces reductive DNA damage. Accurate standard redox potentials (E(o)) of nucleobases and of e(aq)⁻ determine the extent of reaction of e(aq)⁻ with nucleobases. In this work, E(o) values of e(aq)⁻ and of nucleobases have been calculated employing the accurate ab initio Gaussian 4 theory including the polarizable continuum model (PCM). The Gaussian 4-calculated E(o) of e(aq)⁻ (-2.86 V) is in excellent agreement with the experimental one (-2.87 V). The Gaussian 4-calculated E(o) of nucleobases in dimethylformamide (DMF) lie in the range (-2.36 V to -2.86 V); they are in reasonable agreement with the experimental E(o) in DMF and have a mean unsigned error (MUE) = 0.22 V. However, inclusion of specific water molecules reduces this error significantly (MUE = 0.07). With the use of a model of e(aq)⁻ nucleobase complex with six water molecules, the reaction of e(aq)⁻ with the adjacent nucleobase is investigated using approximate ab initio molecular dynamics (MD) simulations including PCM. Our MD simulations show that e(aq)⁻ transfers to uracil, thymine, cytosine, and adenine, within 10 to 120 fs and e(aq)⁻ reacts with guanine only when a water molecule forms a hydrogen bond to O6 of guanine which stabilizes the anion radical.

  8. Effect Size Measures for Differential Item Functioning in a Multidimensional IRT Model

    ERIC Educational Resources Information Center

    Suh, Youngsuk

    2016-01-01

    This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…

  9. 17 CFR 270.8b-11 - Number of copies; signatures; binding.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Number of copies; signatures... (CONTINUED) RULES AND REGULATIONS, INVESTMENT COMPANY ACT OF 1940 § 270.8b-11 Number of copies; signatures... manner prescribed by the appropriate form. Unsigned copies shall be conformed. If the signature of any...

  10. Kids and Chemistry: Large Event Guide.

    ERIC Educational Resources Information Center

    Tinnesand, Michael

    This guide is intended to provide Kids and Chemistry (K&C) with a variety of age-appropriate, fun, and safe demonstrations. It features information on planning a large event and includes safety guidelines. Several activities are included under each major topic. Topics include: (1) Acids and Bases; (2) Unsigned; (3) Kool Tie-Dye; (4) Secret…

  11. 20 CFR 418.1270 - What modified adjusted gross income evidence will we not accept?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false What modified adjusted gross income evidence will we not accept? 418.1270 Section 418.1270 Employees' Benefits SOCIAL SECURITY ADMINISTRATION... letter from IRS acknowledging the change. We will also not accept illegible or unsigned copies of income...

  12. A Comparison of Lord's Chi Square and Raju's Area Measures in Detection of DIF.

    ERIC Educational Resources Information Center

    Cohen, Allan S.; Kim, Seock-Ho

    1993-01-01

    The effectiveness of two statistical tests of the area between item response functions (exact signed area and exact unsigned area) estimated in different samples, a measure of differential item functioning (DIF), was compared with Lord's chi square. Lord's chi square was found the most effective in determining DIF. (SLD)

  13. Landscaping of highway medians and roadway safety at unsignalized intersections.

    PubMed

    Chen, Hongyun; Fabregas, Aldo; Lin, Pei-Sung

    2016-05-01

    Well-planted and maintained landscaping can help reduce driving stress, provide better visual quality, and reduce speeding, thus improving roadway safety. The Florida Department of Transportation (FDOT) Standard Index (SI-546) is one of the more demanding standards in the U.S. for landscaping design criteria at highway medians near intersections. The purposes of this study were to (1) empirically evaluate the safety results of SI-546 at unsignalized intersections and (2) quantify the impacts of geometrics, traffic, and landscaping design features on total crashes and injury plus fatal crashes. The studied unsignalized intersections were divided into (1) those without median trees near intersections, (2) those with median trees near intersections that were compliant with SI-546, and (3) those with median trees near intersections that were non-compliant with SI-546. A total of 72 intersections were selected, for which five-year crash data from 2006-2010 were collected. The sites that were compliant with SI-546 showed the best safety performance in terms of the lowest crash counts and crash rates. Four crash predictive models, two for total crashes and two for injury crashes, were developed. The results indicated that improperly planted and maintained median trees near highway intersections can increase the total number of crashes and injury plus fatal crashes at a 90% confidence level; no significant difference could be found in crash rates between sites that were compliant with SI-546 and sites without trees. All other conditions remaining the same, an intersection with trees that was not compliant with SI-546 had 63% more crashes and almost twice as many injury plus fatal crashes as intersections without trees. The study indicates that appropriate landscaping in highway medians near intersections can be an engineering technology that not only improves roadway environmental quality but also maintains intersection safety. Copyright © 2016. Published by Elsevier Ltd.

  14. An Update on the Role of Serotonin and its Interplay with Dopamine for Reward.

    PubMed

    Fischer, Adrian G; Ullsperger, Markus

    2017-01-01

    The specific role of serotonin and its interplay with dopamine (DA) in adaptive, reward-guided behavior, as well as in drug dependence, remains elusive. Recently, novel methods have allowed cell-type-specific anatomical, functional, and interventional analyses of serotonergic and dopaminergic circuits, promising significant advancement in understanding their functional roles. Furthermore, it is increasingly recognized that co-release of neurotransmitters is functionally relevant, an understanding of which is required in order to interpret results of pharmacological studies and their relationship to neural recordings. Here, we review recent animal studies employing such techniques with the aim of connecting their results to effects observed in human pharmacological studies and subjective effects of drugs. It appears that the additive effect of serotonin and DA conveys significant reward-related information and is subjectively highly euphorizing. Neither DA nor serotonin alone has such an effect. This coincides with optogenetically targeted recordings in mice, where the dopaminergic system codes reward prediction errors (PE), and the serotonergic system mainly unsigned PE. Overall, this pattern of results indicates that joint activity between both systems carries essential reward information and invites parallel investigation of both neurotransmitter systems.
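
    The signed versus unsigned prediction-error distinction referred to above can be made concrete with a Rescorla-Wagner style update: the signed prediction error delta = r - V drives value learning, while its absolute value |delta| tracks surprise irrespective of direction. The sketch below is a minimal illustration of that distinction, not a model of the circuits reviewed here.

      def value_update(value, reward, alpha=0.1):
          """One Rescorla-Wagner step.
          Returns (updated value, signed prediction error, unsigned prediction error)."""
          signed_pe = reward - value        # signed PE (dopamine-like quantity)
          unsigned_pe = abs(signed_pe)      # unsigned PE (surprise-like quantity)
          return value + alpha * signed_pe, signed_pe, unsigned_pe

      V = 0.0
      for r in (1.0, 1.0, 0.0, 1.0):
          V, spe, upe = value_update(V, r)
          print(round(V, 3), round(spe, 3), round(upe, 3))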

  15. Multiconfiguration Pair-Density Functional Theory and Complete Active Space Second Order Perturbation Theory. Bond Dissociation Energies of FeC, NiC, FeS, NiS, FeSe, and NiSe.

    PubMed

    Sharkas, Kamal; Gagliardi, Laura; Truhlar, Donald G

    2017-12-07

    We investigate the performance of multiconfiguration pair-density functional theory (MC-PDFT) and complete active space second-order perturbation theory for computing the bond dissociation energies of the diatomic molecules FeC, NiC, FeS, NiS, FeSe, and NiSe, for which accurate experimental data have become recently available [Matthew, D. J.; Tieu, E.; Morse, M. D. J. Chem. Phys. 2017, 146, 144310-144320]. We use three correlated participating orbital (CPO) schemes (nominal, moderate, and extended) to define the active spaces, and we consider both the complete active space (CAS) and the separated-pair (SP) schemes to specify the configurations included for a given active space. We found that the moderate SP-PDFT scheme with the tPBE on-top density functional has the smallest mean unsigned error (MUE) of the methods considered. This level of theory provides a balanced treatment of the static and dynamic correlation energies for the studied systems. This is encouraging because the method is low in cost even for much more complicated systems.

  16. Multiconfiguration Pair-Density Functional Theory Spectral Calculations Are Stable to Adding Diffuse Basis Functions.

    PubMed

    Hoyer, Chad E; Gagliardi, Laura; Truhlar, Donald G

    2015-11-05

    Time-dependent Kohn-Sham density functional theory (TD-KS-DFT) is useful for calculating electronic excitation spectra of large systems, but the low-energy spectra are often complicated by artificially lowered higher-energy states. This affects even the lowest energy excited states. Here, by calculating the lowest energy spin-conserving excited state for atoms from H to K and for formaldehyde, we show that this problem does not occur in multiconfiguration pair-density functional theory (MC-PDFT). We use the tPBE on-top density functional, which is a translation of the PBE exchange-correlation functional. We compare to a robust multireference method, namely, complete active space second-order perturbation theory (CASPT2), and to TD-KS-DFT with two popular exchange-correlation functionals, PBE and PBE0. We find for atoms that the mean unsigned error (MUE) of MC-PDFT with the tPBE functional improves from 0.42 to 0.40 eV with a double set of diffuse functions, whereas the MUEs for PBE and PBE0 drastically increase from 0.74 to 2.49 eV and from 0.45 to 1.47 eV, respectively.

  17. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
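
    A minimal sketch of the linear ensemble idea described above, in Python; the per-method score maps, the z-scoring step, the weights, and the threshold are illustrative assumptions, not the quantities fitted in the study.

        import numpy as np

        def ensemble_scores(score_maps, weights):
            """Combine per-method connection scores (each an N x N array of
            putative pre->post evidence) as a weighted sum after z-scoring.
            Weights would normally be fit against ground-truth connectivity
            in simulated networks."""
            combined = np.zeros_like(score_maps[0], dtype=float)
            for s, w in zip(score_maps, weights):
                z = (s - s.mean()) / (s.std() + 1e-12)  # put methods on a common scale
                combined += w * z
            return combined

        # Toy usage with three hypothetical inference methods on a 50-neuron network
        rng = np.random.default_rng(0)
        maps = [rng.normal(size=(50, 50)) for _ in range(3)]
        weights = [0.5, 0.3, 0.2]                         # illustrative combination weights
        predicted = ensemble_scores(maps, weights) > 1.5  # threshold into a binary graph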

  18. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
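
    In the standard formulation of interleaved randomized benchmarking, the bounded gate error quoted above is estimated from the decay parameters of the interleaved and reference Clifford sequences, p_C and p, for a d = 2^n dimensional system:

        r_C \approx \frac{(d-1)\,(1 - p_C/p)}{d},

    with the accompanying interval reflecting the theoretical bounds on this estimate when the noise varies over the Clifford set.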

  19. Stimulus Effects on Local Preference: Stimulus-Response Contingencies, Stimulus-Food Pairing, and Stimulus-Food Correlation

    ERIC Educational Resources Information Center

    Davison, Michael; Baum, William M.

    2010-01-01

    Four pigeons were trained in a procedure in which concurrent-schedule food ratios changed unpredictably across seven unsignaled components after 10 food deliveries. Additional green-key stimulus presentations also occurred on the two alternatives, sometimes in the same ratio as the component food ratio, and sometimes in the inverse ratio. In eight…

  20. Evaluation of Fixed Momentary DRO Schedules under Signaled and Unsignaled Arrangements

    ERIC Educational Resources Information Center

    Hammond, Jennifer L.; Iwata, Brian A.; Fritz, Jennifer N.; Dempsey, Carrie M.

    2011-01-01

    Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response…

  1. Relapse of Extinguished Fear after Exposure to a Dangerous Context Is Mitigated by Testing in a Safe Context

    ERIC Educational Resources Information Center

    Goode, Travis D.; Kim, Janice J.; Maren, Stephen

    2015-01-01

    Aversive events can trigger relapse of extinguished fear memories, presenting a major challenge to the long-term efficacy of therapeutic interventions. Here, we examined factors regulating the relapse of extinguished fear after exposure of rats to a dangerous context. Rats received unsignaled shock in a distinct context ("dangerous"…

  2. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor’s position by means of a joystick. The cursor’s position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor’s trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
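
    For clarity, the reported rates follow the usual confusion-matrix definitions,

        \mathrm{TPR} = \frac{TP}{TP + FN}, \qquad \mathrm{TNR} = \frac{TN}{TN + FP},

    so the TPR measures the fraction of error trials that are detected and the TNR the fraction of correct trials that are left undisturbed.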

  3. Fragment-orbital tunneling currents and electronic couplings for analysis of molecular charge-transfer systems.

    PubMed

    Hwang, Sang-Yeon; Kim, Jaewook; Kim, Woo Youn

    2018-04-04

    In theoretical charge-transfer research, calculation of the electronic coupling element is crucial for examining the degree of the electronic donor-acceptor interaction. The tunneling current (TC), representing the magnitudes and directions of electron flow, provides a way of evaluating electronic couplings, along with the ability to visualize how electrons flow in systems. Here, we applied the TC theory to π-conjugated organic dimer systems, in the form of our fragment-orbital tunneling current (FOTC) method, which uses the frontier molecular orbitals of system fragments as diabatic states. For a comprehensive test of FOTC, we assessed how reasonable the computed electronic couplings and the corresponding TC densities are for the hole- and electron-transfer databases HAB11 and HAB7. FOTC gave a mean relative unsigned error of 12.5% with respect to the high-level ab initio reference. This performance is comparable with that of fragment-orbital density functional theory, for which the same error is 20.6% or 13.9%, depending on the formulation. In the test of a set of nucleobase π stacks, we showed that the original TC expression is also applicable to nondegenerate cases under the condition that the overlap between the charge distributions of diabatic states is small enough to offset the energy difference. Lastly, we carried out visual analysis on the FOTC densities of thiophene dimers with different intermolecular alignments. The result depicts an intimate topological connection between the system geometry and electron flow. Our work provides quantitative and qualitative grounds for FOTC, showing it to be a versatile tool in the characterization of molecular charge-transfer systems.

  4. Passive quantum error correction of linear optics networks through error averaging

    NASA Astrophysics Data System (ADS)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks based on unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  5. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.

  6. Let's talk it over : interagency cooperation facilitates success : a case study : the New York, New Jersey, Connecticut metropolitan area TRANSMIT operational test : ensuring integration of intelligent transportation systems products and services

    DOT National Transportation Integrated Search

    1989-01-01

    This manual provides basic background information and step-by-step procedures for conducting traffic conflict surveys at signalized and unsignalized intersections. The manual was prepared as a training aid and reference source for persons who are ass...

  7. NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4

    DTIC Science & Technology

    2012-09-26

    defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType> ... interleaved by line (BIL); Band interleaved by pixel (BIP). Element RasterFileFormatType/DataType: type restriction of xsd:string.

  8. Impact of repeated intravenous cocaine administration on incentive motivation depends on mode of drug delivery.

    PubMed

    LeBlanc, Kimberly H; Maidment, Nigel T; Ostlund, Sean B

    2014-11-01

    The incentive sensitization theory of addiction posits that repeated exposure to drugs of abuse, like cocaine, can lead to long-term adaptations in the neural circuits that support motivated behavior, providing an account of pathological drug-seeking behavior. Although pre-clinical findings provide strong support for this theory, much remains unknown about the conditions that support incentive sensitization. The current study examined whether the mode of cocaine administration is an important factor governing that drug's long-term impact on behavior. Separate groups of rats were allowed either to self-administer intravenous cocaine or were given an equivalent number and distribution of unsignaled cocaine or saline infusions. During the subsequent test of incentive motivation (Pavlovian-to-instrumental transfer), we found that rats with a history of cocaine self-administration showed strong cue-evoked food seeking, in contrast to rats given unsignaled cocaine or saline. This finding indicates that the manner in which cocaine is administered can determine its lasting behavioral effects, suggesting that subjective experiences during drug use play a critical role in the addiction process. Our findings may therefore have important implications for the study and treatment of compulsive drug seeking. © 2013 Society for the Study of Addiction.

  9. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    PubMed

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms for computing the minimal reversal distance have been proposed, culminating in the current best-known approximation ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach. Several experiments were performed with one-hundred randomly generated permutations, single benchmark permutations, and biological permutations. Results of the experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA was shown to improve on the results of OBMA for permutations of length greater than or equal to 60. The applicability of our proposed algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
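
    As a concrete illustration of the breakpoint notion used by the hybrid heuristic above, the following generic sketch (not the authors' implementation) counts the breakpoints of an unsigned permutation; because a single reversal can remove at most two breakpoints, ceil(b/2) is a simple lower bound on the reversal distance.

        def breakpoints(perm):
            """Count breakpoints of an unsigned permutation of 1..n.
            The permutation is framed with 0 and n+1; a breakpoint is an
            adjacent pair whose values are not consecutive integers."""
            ext = [0] + list(perm) + [len(perm) + 1]
            return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

        perm = [3, 1, 2, 5, 4]
        b = breakpoints(perm)      # 4 breakpoints for this permutation
        lower_bound = -(-b // 2)   # ceil(b/2) <= reversal distance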

  10. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders.

    PubMed

    Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise

    2013-05-01

    To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.

  11. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders

    PubMed Central

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2012-01-01

    Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137

  12. Analysis of fast boundary-integral approximations for modeling electrostatic contributions of molecular binding

    PubMed Central

    Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.

    2013-01-01

    We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561

  13. Prediction of the translocon-mediated membrane insertion free energies of protein sequences.

    PubMed

    Park, Yungki; Helms, Volkhard

    2008-05-15

    Helical membrane proteins (HMPs) play crucial roles in a variety of cellular processes. Unlike water-soluble proteins, HMPs need not only to fold but also get inserted into the membrane to be fully functional. This process of membrane insertion is mediated by the translocon complex. Thus, it is of great interest to develop computational methods for predicting the translocon-mediated membrane insertion free energies of protein sequences. We have developed Membrane Insertion (MINS), a novel sequence-based computational method for predicting the membrane insertion free energies of protein sequences. A benchmark test gives a correlation coefficient of 0.74 between predicted and observed free energies for 357 known cases, which corresponds to a mean unsigned error of 0.41 kcal/mol. These results are significantly better than those obtained by traditional hydropathy analysis. Moreover, the ability of MINS to reasonably predict membrane insertion free energies of protein sequences allows for effective identification of transmembrane (TM) segments. Subsequently, MINS was applied to predict the membrane insertion free energies of 316 TM segments found in known structures. An in-depth analysis of the predicted free energies reveals a number of interesting findings about the biogenesis and structural stability of HMPs. A web server for MINS is available at http://service.bioinformatik.uni-saarland.de/mins

  14. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).

  15. Determination of partial molar volumes from free energy perturbation theory†

    PubMed Central

    Vilseck, Jonah Z.; Tirado-Rives, Julian

    2016-01-01

    Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood–Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm3 mol−1. The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute–solvent interactions. PMID:25589343
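
    In finite-difference form, the pressure-derivative route described above amounts to

        V^{\circ} = \lim_{c \to 0} \left( \frac{\partial \Delta G_{\mathrm{solv}}}{\partial P} \right)_{T} \approx \frac{\Delta G_{\mathrm{solv}}(P + \Delta P) - \Delta G_{\mathrm{solv}}(P)}{\Delta P},

    with the two solvation free energies obtained from FEP simulations at the two pressures; the finite-difference expression is a generic illustration and not necessarily the exact estimator used by the authors.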

  16. Determination of partial molar volumes from free energy perturbation theory.

    PubMed

    Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L

    2015-04-07

    Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood-Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm(3) mol(-1). The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute-solvent interactions.

  17. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems, the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
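
    For readers unfamiliar with kriging as used above, the following is a minimal, generic sketch of a kriging (Gaussian-process) predictor with a squared-exponential kernel; the kernel choice, hyperparameters, zero-mean assumption, and toy descriptors are illustrative assumptions, not the models trained in the study.

        import numpy as np

        def kriging_predict(X_train, y_train, X_new, length_scale=1.0, noise=1e-8):
            """Predict targets at X_new by simple kriging (GP regression with a
            squared-exponential kernel and an assumed zero mean)."""
            def kernel(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-0.5 * d2 / length_scale**2)

            K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
            weights = np.linalg.solve(K, y_train)   # kriging weights
            return kernel(X_new, X_train) @ weights

        # Toy usage: 20 training points (3 descriptors each) -> 1 target moment
        rng = np.random.default_rng(1)
        X = rng.normal(size=(20, 3))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
        y_pred = kriging_predict(X, y, rng.normal(size=(5, 3)))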

  18. Optimal Multiple Surface Segmentation With Shape and Context Priors

    PubMed Central

    Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong

    2014-01-01

    Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309

  19. Predicting the photoinduced electron transfer thermodynamics in polyfluorinated 1,3,5-triarylpyrazolines based on multiple linear free energy relationships†

    PubMed Central

    Verma, Manjusha; Chaudhry, Aneese F.; Fahrni, Christoph J.

    2010-01-01

    The photophysical properties of 1,3,5-triarylpyrazolines are strongly influenced by the nature and position of substituents attached to the aryl-rings, rendering this fluorophore platform well suited for the design of fluorescent probes utilizing a photoinduced electron transfer (PET) switching mechanism. To explore the tunability of two key parameters that govern the PET thermodynamics, the excited state energy ΔE00 and acceptor potential E(A/A−), a library of polyfluoro-substituted 1,3-diaryl-5-phenyl-pyrazolines was synthesized and characterized. The observed trends for the PET parameters were effectively captured through multiple Hammett linear free energy relationships (LFER) using a set of independent substituent constants for each of the two aryl rings. Given the lack of experimental Hammett constants for polyfluoro substituted aromatics, theoretically derived constants based on the electrostatic potential at the nucleus (EPN) of carbon atoms were employed as quantum chemical descriptors. The performance of the LFER was evaluated with a set of compounds that were not included in the training set, yielding a mean unsigned error of 0.05 eV for the prediction of the combined PET parameters. The outlined LFER approach should be well suited to design and optimize the performance of cation-responsive 1,3,5-triarylpyrazolines. PMID:19343239

  20. Differential effects of galvanic vestibular stimulation on arm position sense in right- vs. left-handers.

    PubMed

    Schmidt, Lena; Artinger, Frank; Stumpf, Oliver; Kerkhoff, Georg

    2013-04-01

    The human brain is organized asymmetrically in two hemispheres with different functional specializations. Left- and right-handers differ in many functional capacities and their anatomical representations. Right-handers often show a stronger functional lateralization than left-handers, the latter showing a more bilateral, symmetrical brain organization. Recent functional imaging evidence shows a different lateralization of the cortical vestibular system towards the side of the preferred hand in left- vs. right-handers as well. Since the vestibular system is involved in somatosensory processing and the coding of body position, vestibular stimulation should affect such capacities differentially in left- vs. right-handers. In the present, sham-stimulation-controlled study we explored this hypothesis by studying the effects of galvanic vestibular stimulation (GVS) on proprioception in both forearms in left- and right-handers. Horizontal arm position sense (APS) was measured with an opto-electronic device. Second, the polarity-specific online- and after-effects of subsensory, bipolar GVS on APS were investigated in different sessions separately for both forearms. At baseline, both groups did not differ in their unsigned errors for both arms. However, right-handers showed significant directional errors in APS of both arms towards their own body. Right-cathodal/left-anodal GVS, resulting in right vestibular cortex activation, significantly deteriorated left APS in right-handers, but had no detectable effect on APS in left-handers in either arm. These findings are compatible with a right-hemisphere dominance for vestibular functions in right-handers and a differential vestibular organization in left-handers that compensates for the disturbing effects of GVS on APS. Moreover, our results show superior arm proprioception in left-handers in both forearms. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Resolving Transition Metal Chemical Space: Feature Selection for Machine Learning and Structure-Property Relationships.

    PubMed

    Janet, Jon Paul; Kulik, Heather J

    2017-11-22

    Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-0.005 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
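
    Standard autocorrelation descriptors of the kind being revised here sum products of atomic properties over atom pairs separated by a fixed graph distance; a generic form (without the RAC modifications of starting point and scope) is

        P_d = \sum_{i} \sum_{j} P_i \, P_j \, \delta(d_{ij}, d),

    where P_i is a heuristic atomic property such as electronegativity and d_{ij} is the bond-path distance between atoms i and j.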

  2. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
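
    The 1/\sqrt{N} dependence referred to above is the usual scaling of the standard error of an averaged response with the number of sweeps,

        \sigma_{\bar{x}} \approx \frac{\sigma_{\mathrm{sweep}}}{\sqrt{N}},

    so quadrupling the number of averaged sweeps roughly halves the residual noise in the N1P1 amplitude estimate.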

  3. Relapse of extinguished fear after exposure to a dangerous context is mitigated by testing in a safe context

    PubMed Central

    Goode, Travis D.; Kim, Janice J.

    2015-01-01

    Aversive events can trigger relapse of extinguished fear memories, presenting a major challenge to the long-term efficacy of therapeutic interventions. Here, we examined factors regulating the relapse of extinguished fear after exposure of rats to a dangerous context. Rats received unsignaled shock in a distinct context (“dangerous” context) 24 h prior to auditory fear conditioning in another context. Fear to the auditory conditioned stimulus (CS) was subsequently extinguished either in the conditioning context (“ambiguous” context) or in a third novel context (“safe” context). Exposure to the dangerous context 30 min before a CS retention test caused relapse to the CS in the ambiguous and safe test contexts relative to nonextinguished controls. When rats were tested 24 h later (with or without short-term testing), rats tested in the ambiguous context continued to exhibit relapse, whereas rats tested in the safe context did not. Additionally, exposure of rats to the conditioning context—in place of the unsignaled shock context—did not result in relapse of fear to the CS in the safe testing context. Our work highlights the vulnerabilities of extinction recall to interference, and demonstrates the importance of context associations in the relapse of fear after extinction. PMID:25691517

  4. Correlation between solar flare productivity and photospheric vector magnetic fields

    NASA Astrophysics Data System (ADS)

    Cui, Yanmei; Wang, Huaning

    2008-11-01

    Studying the statistical correlation between the solar flare productivity and photospheric magnetic fields is very important and necessary. It is helpful to set up a practical flare forecast model based on magnetic properties and improve the physical understanding of solar flare eruptions. In the previous study ([Cui, Y.M., Li, R., Zhang, L.Y., He, Y.L., Wang, H.N. Correlation between solar flare productivity and photospheric magnetic field properties 1. Maximum horizontal gradient, length of neutral line, number of singular points. Sol. Phys. 237, 45-59, 2006]; from now on we refer to this paper as ‘Paper I’), three measures of the maximum horizontal gradient, the length of the neutral line, and the number of singular points are computed from 23990 SOHO/MDI longitudinal magnetograms. The statistical relationship between the solar flare productivity and these three measures is well fitted with sigmoid functions. In the current work, the three measures of the length of strong-shear neutral line, total unsigned current, and total unsigned current helicity are computed from 1353 vector magnetograms observed at Huairou Solar Observing Station. The relationship between the solar flare productivity and the current three measures can also be well fitted with sigmoid functions. These results are expected to be beneficial to future operational flare forecasting models.

  5. Relapse of extinguished fear after exposure to a dangerous context is mitigated by testing in a safe context.

    PubMed

    Goode, Travis D; Kim, Janice J; Maren, Stephen

    2015-03-01

    Aversive events can trigger relapse of extinguished fear memories, presenting a major challenge to the long-term efficacy of therapeutic interventions. Here, we examined factors regulating the relapse of extinguished fear after exposure of rats to a dangerous context. Rats received unsignaled shock in a distinct context ("dangerous" context) 24 h prior to auditory fear conditioning in another context. Fear to the auditory conditioned stimulus (CS) was subsequently extinguished either in the conditioning context ("ambiguous" context) or in a third novel context ("safe" context). Exposure to the dangerous context 30 min before a CS retention test caused relapse to the CS in the ambiguous and safe test contexts relative to nonextinguished controls. When rats were tested 24 h later (with or without short-term testing), rats tested in the ambiguous context continued to exhibit relapse, whereas rats tested in the safe context did not. Additionally, exposure of rats to the conditioning context--in place of the unsignaled shock context--did not result in relapse of fear to the CS in the safe testing context. Our work highlights the vulnerabilities of extinction recall to interference, and demonstrates the importance of context associations in the relapse of fear after extinction. © 2015 Goode et al.; Published by Cold Spring Harbor Laboratory Press.

  6. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
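
    In the PDF-based approach mentioned above, the average SER is the conditional SER weighted by the distribution of the instantaneous SNR,

        \overline{P}_s = \int_{0}^{\infty} P_s(\gamma)\, f_{\gamma}(\gamma)\, \mathrm{d}\gamma,

    where f_{\gamma} combines the M-distributed turbulence and the pointing-error statistics; writing the Q-functions inside P_s(\gamma) as Meijer G-functions is what makes this integral tractable as a converging power series.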

  7. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
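
    For context, information-criterion-based model averaging weights of the kind discussed above typically take the form

        w_k = \frac{\exp(-\Delta_k / 2)}{\sum_{j} \exp(-\Delta_j / 2)}, \qquad \Delta_k = \mathrm{IC}_k - \min_j \mathrm{IC}_j,

    which makes clear why even a modest difference in criterion values, inflated here by an inappropriate error covariance, can push nearly all of the weight onto a single model.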

  8. One Time Passwords in Everything (OPIE): Experiences with Building and Using Stronger Authentication

    DTIC Science & Technology

    1995-01-01

    opiepasswd(1). The name change brings it more in line with its UNIX counterpart passwd(1), which should make both programs easier to remember for users. This... char *passwd) int opiehash(char *x, unsigned algorithm) The one-time password schemes implemented in OPIE, as first described in [Hal94], compute a... seed, passwd); while (sequence-- != 0) opiehash(result, algorithm); opiebtoe(result, words); Send words. ... 6 Deployment Every machine that has

  9. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
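
    The bound in question has the familiar exponential form

        \overline{P}_e \le \exp\left[ -N\, E_r(R) \right],

    where N is the block length, R the rate, and E_r(R) the random coding exponent; the point of the abstract is that the looseness at low rates comes from the ensemble itself rather than from averaging over it.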

  10. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
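
    The Dalton-type model referred to above expresses open-water evaporation as a wind function multiplied by a vapor-pressure deficit,

        E = f(u)\,(e_s - e_a),

    where e_s is the saturation vapor pressure at the water-surface temperature and e_a is the actual vapor pressure of the air; the modification proposed in the paper adds a dependence on relative humidity, whose exact form is not given in the abstract.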

  11. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, which is a specific combination of forecast weights obtained from different weight-construction methods, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we have shown that model averaging schemes such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to empirical business and economic data sets for Malaysia: the Gross Domestic Product (GDP) growth rate, the Consumer Price Index (CPI) and the Average Lending Rate (ALR).
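
    All of the schemes being compared fit the generic combination template

        \hat{y}_c = \sum_{i} w_i\, \hat{y}_i, \qquad \sum_{i} w_i = 1,

    with the mean squared forecast error E[(y - \hat{y}_c)^2] as the evaluation criterion; forecast weight averaging, as described above, forms a specific combination of the weight vectors produced by the different construction methods before computing \hat{y}_c.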

  12. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    PubMed Central

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377

  13. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
    Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
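
    The quadrature combination described above is simply

        \sigma_{\mathrm{total}} = \sqrt{\sigma_{\mathrm{space}}^2 + \sigma_{\mathrm{time}}^2},

    where \sigma_{\mathrm{space}} is the error from sampling a finite number of verticals and \sigma_{\mathrm{time}} is the error from the minimal time averaging at each vertical; adding verticals reduces both terms, whereas adding transits at existing verticals reduces only the time-averaging term.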

  14. Protein-ligand binding free energy estimation using molecular mechanics and continuum electrostatics. Application to HIV-1 protease inhibitors

    NASA Astrophysics Data System (ADS)

    Zoete, V.; Michielin, O.; Karplus, M.

    2003-12-01

A method is proposed for the estimation of absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, ΔGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method are developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structures, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
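    Written out, the linear parameterization described above has the generic form below; the coefficients a, b, c and the additive constant are assumptions about the exact functional form (fitted to the 16-complex training set) and are not quoted from the paper.

```latex
\Delta G_{\mathrm{bind}} \approx a\,\mathrm{SAS}_{\mathrm{bur}}
  + b\,E_{\mathrm{elec}} + c\,\Delta G_{\mathrm{solv}} + \mathrm{const.}
```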

  15. Accurate Ionization Energies for Mononuclear Copper Complexes Remain a Challenge for Density Functional Theory.

    PubMed

    Dereli, Büsra; Ortuño, Manuel A; Cramer, Christopher J

    2018-04-17

    Copper is ubiquitous and its one-electron redox chemistry is central to many catalytic processes. Modeling such chemistry requires electronic structure methods capable of the accurate prediction of ionization energies (IEs) for compounds including copper in different oxidation states and supported by various ligands. Herein, we estimate IEs for 12 mononuclear Cu species previously reported in the literature by using 21 modern density functionals and the DLPNO-CCSD(T) wave function theory model; we consider extrapolated values of the latter to provide reference values of acceptable accuracy. Our results reveal a considerable diversity in functional performance. Although there is nearly always at least one functional that performs well for any given species, there are none that do so for every member of the test set, and certain cases are particularly pathological. Over the entire test set, the SOGGA11-X functional performs best with a mean unsigned error (MUE) of 0.22 eV. PBE0, ωB97X-D, CAM-B3LYP, M11-L, B3LYP, and M11 exhibit MUEs ranging between 0.23 and 0.34 eV. When including relativistic effects with the zero-order regular approximation, ωB97X-D, CAM-B3LYP, and PBE0 are found to provide the best accuracy. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
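    For reference, the mean unsigned error quoted throughout these records is simply the average absolute deviation from the benchmark values; a minimal sketch, with placeholder ionization energies rather than the paper's data:

```python
def mean_unsigned_error(predicted, reference):
    """Mean unsigned (absolute) error between two equal-length sequences."""
    if len(predicted) != len(reference):
        raise ValueError("sequences must have the same length")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Placeholder ionization energies in eV (not the paper's values).
dft_ies = [7.91, 6.45, 8.02]
ref_ies = [7.70, 6.60, 8.30]
print(round(mean_unsigned_error(dft_ies, ref_ies), 2))  # 0.21 eV, on the scale of the MUEs discussed above
```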

  16. Prediction suppression and surprise enhancement in monkey inferotemporal cortex.

    PubMed

    Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R

    2017-07-01

    Exposing monkeys, over the course of days and weeks, to pairs of images presented in fixed sequence, so that each leading image becomes a predictor for the corresponding trailing image, affects neuronal visual responsiveness in area TE. At the end of the training period, neurons respond relatively weakly to a trailing image when it appears in a trained sequence and, thus, confirms prediction, whereas they respond relatively strongly to the same image when it appears in an untrained sequence and, thus, violates prediction. This effect could arise from prediction suppression (reduced firing in response to the occurrence of a probable event) or surprise enhancement (elevated firing in response to the omission of a probable event). To identify its cause, we compared firing under the prediction-confirming and prediction-violating conditions to firing under a prediction-neutral condition. The results provide strong evidence for prediction suppression and limited evidence for surprise enhancement. NEW & NOTEWORTHY In predictive coding models of the visual system, neurons carry signed prediction error signals. We show here that monkey inferotemporal neurons exhibit prediction-modulated firing, as posited by these models, but that the signal is unsigned. The response to a prediction-confirming image is suppressed, and the response to a prediction-violating image may be enhanced. These results are better explained by a model in which the visual system emphasizes unpredicted events than by a predictive coding model. Copyright © 2017 the American Physiological Society.

  17. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kerber, A. G.; Sellers, P. J.

    1993-01-01

Spatial averaging errors that may occur when creating hemispherical reflectance (albedo) maps for different cover types from directional reflectance data are assessed by comparing the results of a direct nadir technique for estimating hemispherical reflectance with those obtained using a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those from the direct nadir technique, although the magnitude of the difference depends on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.

  18. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates, constructed with approximate rather than exact statistics, are shown to be more accurate than composite averages. They are also found to be nearly as accurate as optimal estimates based on the correct signal and measurement error variances and correlation functions over realistic ranges of these parameters, which makes suboptimal estimation a viable practical alternative to the composite average method generally employed at present.
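    For readers unfamiliar with the terminology, the minimum-mean-square-error linear estimate of a time average from a vector of observations d is conventionally written in the Gauss-Markov form below; this is a generic reminder of the standard formalism, not necessarily the exact notation or formulation used in the paper.

```latex
\hat{\bar{x}} = \mathbf{w}^{\mathsf{T}}\mathbf{d}, \qquad
\mathbf{w} = \left(\mathbf{C}_{dd} + \mathbf{R}\right)^{-1}\mathbf{c}_{d\bar{x}},
```

    where C_dd is the signal covariance among the observation times, R is the measurement-error covariance, and c_dx̄ is the covariance between each observation and the desired time average; suboptimal estimates replace these covariances with approximate models.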

  19. Investigation of geomagnetic field forecasting and fluid dynamics of the core

    NASA Technical Reports Server (NTRS)

    Benton, E. R. (Principal Investigator)

    1981-01-01

    The magnetic determination of the depth of the core-mantle boundary using MAGSAT data is discussed. Refinements to the approach of using the pole-strength of Earth to evaluate the radius of the Earth's core-mantle boundary are reported. The downward extrapolation through the electrically conducting mantle was reviewed. Estimates of an upper bound for the time required for Earth's liquid core to overturn completely are presented. High order analytic approximations to the unsigned magnetic flux crossing the Earth's surface are also presented.

  20. Far Forward Life Support System (FFLSS) Phase II

    DTIC Science & Technology

    2001-05-01

button will clear audible, flashing, or constant red alarms. Pushing the Config button drops down to the Level 2 menu (bottom line of Figure 8). In...disabled. Again, with testing this audible noise can be disabled. The visual flashing or constant red cannot be disabled. Pushing Return goes back to Level...Calling PLOTPIXEL(x, y) and PLOTLINE(x1, x2, y1, y2) is as simple as pushing the appropriate bytes (unsigned integers) onto the stack and jumping to the

  1. Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas

    USGS Publications Warehouse

    Puente, Celso

    1978-01-01

    The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.

  2. Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties

    NASA Astrophysics Data System (ADS)

    Hua, W.

    2017-12-01

Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational data sampling for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated with an Empirical Orthogonal Function (EOF)-based optimal interpolation (OI) method that takes the data errors into account. The results show that this estimate is more precise and robust than the result from a simple average, providing a better basis for past climate reconstruction. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.

  3. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.

  4. WE-A-17A-03: Catheter Digitization in High-Dose-Rate Brachytherapy with the Assistance of An Electromagnetic (EM) Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, AL; Bhagwat, MS; Buzurovic, I

Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shift >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.

  5. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in phantom study was 2.5 mm (STD=1.1mm). The average robotic system error in super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm thus having larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
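    The "orthogonal error components" assumption above amounts to removing the before-insertion error from the overall error in quadrature; as a quick check of the quoted figure:

```latex
e_{\text{due-to-insertion}} \approx \sqrt{e_{\text{overall}}^{2} - e_{\text{before}}^{2}}
  = \sqrt{2.5^{2} - 1.3^{2}}\ \text{mm} \approx 2.1\ \text{mm},
```

    consistent with the reported 2.13 mm (the small difference presumably reflects rounding of the inputs).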

  6. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  7. How Do Vision and Hearing Impact Pedestrian Time-to-Arrival Judgments?

    PubMed Central

    Roper, JulieAnne M.; Hassan, Shirin E.

    2014-01-01

Purpose To determine how accurate normally sighted male and female pedestrians were at making time-to-arrival (TTA) judgments of approaching vehicles when using just their hearing or both their hearing and vision. Methods Ten male and 14 female subjects with confirmed normal vision and hearing estimated the TTA of approaching vehicles along an unsignalized street under two sensory conditions: (i) using both habitual vision and hearing; and (ii) using habitual hearing only. All subjects estimated how long the approaching vehicle would take to reach them (i.e., the TTA). The actual TTA of vehicles was also measured using custom-made sensors. The error in TTA judgments for each subject under each sensory condition was calculated as the difference between the actual and estimated TTA. A secondary timing experiment was also conducted to adjust each subject’s TTA judgments for their “internal metronome”. Results Error in TTA judgments changed significantly as a function of both the actual TTA (p<0.0001) and sensory condition (p<0.0001). While no main effect for gender was found (p=0.19), the way the TTA judgments varied within each sensory condition for each gender was different (p<0.0001). Females tended to be equally accurate under either condition (p≥0.01), with the exception of TTA judgments made when the actual TTA was two seconds or less and eight seconds or longer, for which the vision-and-hearing condition was more accurate (p≤0.002). Males made more accurate TTA judgments under the hearing-only condition for actual TTA values of five seconds or less (p<0.0001), after which there were no significant differences between the two conditions (p≥0.01). Conclusions Our data suggest that males and females use visual and auditory information differently when making TTA judgments. While the sensory condition did not affect the females’ accuracy in judgments, males initially tended to be more accurate when using their hearing only. PMID:24509543

  8. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  9. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.

  10. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.

  11. An extended car-following model with consideration of vehicle to vehicle communication of two conflicting streams

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Li, Peng

    2017-05-01

In this paper, we propose a car-following model to explore the influence of V2V communication on driving behavior at unsignalized intersections with two crossing streams and how a speed guidance strategy affects operational efficiency. The numerical results illustrate that the benefits of the guidance strategy can be enhanced by lengthening the guiding space range and increasing the maximum speed limit, and that the guidance strategy is more suitable under low-to-medium traffic density and small safety-interval conditions.

  12. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

A model of the average capacity of an optical wireless ground-to-train communication link along a curved track with pointing errors is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive an average-capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. For a larger average link capacity, the strength of atmospheric turbulence, the variance of the pointing errors, and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. The transmit aperture can be increased to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander. When the system adopts automatic beam tracking at a receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches its maximum value. The impact of variations of the non-Kolmogorov spectral index on the average capacity of the link can be ignored.

  13. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 outperforms the other methods by achieving the best CR. In the lossy case, JPEG2000 and JP3D are examined at different CRs. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index; the biochemical parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D in lossy compression, not only in terms of mean square error (MSE) as the CR increases but also in compression time. In addition, our compression results with both algorithms demonstrate that even at high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters remain within the range of reported values.
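    The two figures of merit used above, compression ratio and mean square error, are straightforward to compute once a codec has produced a compressed file and a decoded image. A minimal sketch with synthetic 16-bit data follows; the codec calls themselves are omitted, since the specific JPEG-LS/JPEG2000/JP3D implementations are not named in the record, and the array sizes are illustrative.

```python
import numpy as np

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes

def mean_square_error(original: np.ndarray, decoded: np.ndarray) -> float:
    """MSE between the original and the decompressed phase image."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    return float(np.mean(diff ** 2))

# Synthetic 16-bit phase map standing in for a reconstructed RBC image.
rng = np.random.default_rng(0)
phase = rng.integers(0, 2**16, size=(256, 256), dtype=np.uint16)
decoded = phase.copy()  # a lossless codec would reproduce the data exactly (MSE = 0)
print(compression_ratio(phase.nbytes, phase.nbytes // 3), mean_square_error(phase, decoded))
```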

  14. Simulation of Reversible Protein–Protein Binding and Calculation of Binding Free Energies Using Perturbed Distance Restraints

    PubMed Central

    2017-01-01

Virtually all biological processes depend on the interaction between proteins at some point. The correct prediction of biomolecular binding free energies has many interesting applications in both basic and applied pharmaceutical research. While recent advances in the field of molecular dynamics (MD) simulations have proven the feasibility of the calculation of protein–protein binding free energies, the large conformational freedom of proteins and complex free energy landscapes of binding processes make such calculations a difficult task. Moreover, convergence and reversibility of resulting free-energy values remain poorly described. In this work, an easy-to-use, yet robust approach for the calculation of standard-state protein–protein binding free energies using perturbed distance restraints is described. In the binding process the conformations of the proteins were restrained, as suggested earlier. Two approaches to avoid end-state problems upon release of the conformational restraints were compared. The method was evaluated by practical application to a small model complex of ubiquitin and the very flexible ubiquitin-binding domain of human DNA polymerase ι (UBM2). All computed free energy differences were closely monitored for convergence, and the calculated binding free energies had a mean unsigned deviation of only 1.4 or 2.5 kJ·mol⁻¹ from experimental values. Statistical error estimates were on the order of thermal noise. We conclude that the presented method has promising potential for broad applicability to quantitatively describe protein–protein and various other kinds of complex formation. PMID:28898077

  15. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

Estimates of the standard errors of total ozone time and area means are presented, derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes as determined from daily Dobson data. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of these means. Standard errors of time averages depend on the temporal variability and autocorrelation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
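    The dependence of the standard error of a time average on both variability and autocorrelation is commonly expressed through an effective sample size; for a series of N values with standard deviation σ and lag-one autocorrelation ρ, a standard AR(1)-type approximation (not necessarily the exact formulation used in this report) is:

```latex
\operatorname{SE}(\bar{x}) \approx \frac{\sigma}{\sqrt{N_{\mathrm{eff}}}}, \qquad
N_{\mathrm{eff}} \approx N\,\frac{1-\rho}{1+\rho}.
```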

  16. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.

  17. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime response of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of downtime in order to minimize medication errors during such downtime.

  18. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
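    The benefit of taking the logarithm after averaging, rather than averaging per-pulse logarithms, comes from the nonlinearity of the log under noise. A toy numerical illustration of the bias difference follows; the numbers are made up and unrelated to the instrument described in the record.

```python
import numpy as np

rng = np.random.default_rng(1)
true_ratio = 0.8                                          # "true" pulse-energy ratio
pulses = true_ratio + 0.1 * rng.standard_normal(100_000)  # noisy per-pulse ratios
pulses = pulses[pulses > 0]                               # keep the toy example well defined

log_after_averaging = np.log(pulses.mean())   # average first, then take the log
averaging_after_log = np.log(pulses).mean()   # log each pulse, then average

print(log_after_averaging - np.log(true_ratio))  # small bias
print(averaging_after_log - np.log(true_ratio))  # larger negative bias (Jensen's inequality)
```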

  19. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) was computed at every examination. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.

  20. Improving laboratory data entry quality using Six Sigma.

    PubMed

    Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks

    2013-01-01

Makerere University provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data and determining data-entry error root causes. Finally, the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors from 423 errors a month (i.e. 4.34 Six Sigma) in the first month, down to an average of 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote a quality environment. Laboratory staff can deliver excellent care at a lower cost by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve clinical workflow processes and yield cost savings across the health care continuum.
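    As a quick arithmetic check of the figures quoted above (a back-of-the-envelope sketch using only numbers stated in the record; the computed reduction of about 61% is close to the quoted 60.5%):

```python
errors_before = 423       # errors per month before the project
errors_after = 166        # average errors per month afterwards
cost_per_error = 16.25    # estimated cost to identify and fix one error (USD)

reduction = errors_before - errors_after              # 257 errors/month
percent_reduction = 100 * reduction / errors_before   # ~60.8%
annual_savings = reduction * 12 * cost_per_error      # ~$50,115

print(reduction, round(percent_reduction, 1), round(annual_savings))
```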

  1. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Comparison of plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and ultimately, patients.

  2. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

Studying the effects of pedestrian crossing behaviors on traffic flow is important for addressing urban traffic congestion. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability, and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases; when the vehicle braking probability is low, emergency braking is more likely and the saturated flow fluctuates strongly; the saturated flow decreases slightly as the pedestrian acceleration crossing probability increases; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians deciding whether to back up; the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and approaching zero when it exceeds 0.5. The simulations show that frequent crossing behavior strongly influences vehicle flow; as the pedestrian generation probability increases, the vehicle flow decreases and rapidly enters a severely congested state.
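    For context, the baseline Nagel-Schreckenberg update that the extended model builds on consists of acceleration, gap-limited braking, random slowdown, and movement. A minimal single-lane ring-road sketch follows; this is the standard NaSch model only, not the paper's pedestrian-vehicle extension, and the parameter values are illustrative.

```python
import random

def nasch_step(positions, speeds, road_length, v_max=5, p_slow=0.3):
    """One parallel update of the basic Nagel-Schreckenberg model on a ring road."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(speeds[i] + 1, v_max)         # 1. accelerate toward v_max
        v = min(v, gap)                       # 2. brake to avoid the car ahead
        if v > 0 and random.random() < p_slow:
            v -= 1                            # 3. random slowdown
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_length for i in range(n)]
    return new_positions, new_speeds

# Ten vehicles on a 100-cell ring, initially at rest.
random.seed(42)
pos, spd = list(range(0, 100, 10)), [0] * 10
for _ in range(5):
    pos, spd = nasch_step(pos, spd, 100)
print(pos, spd)
```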

  3. Study of magnetic helicity injection in the active region NOAA 9236 producing multiple flare-associated coronal mass ejection events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sung-Hong; Cho, Kyung-Suk; Bong, Su-Chan

To better understand a preferred magnetic field configuration and its evolution during coronal mass ejection (CME) events, we investigated the spatial and temporal evolution of photospheric magnetic fields in the active region NOAA 9236 that produced eight flare-associated CMEs during the time period of 2000 November 23-26. The time variations of the total magnetic helicity injection rate and the total unsigned magnetic flux are determined and examined not only in the entire active region but also in some local regions such as the main sunspots and the CME-associated flaring regions using SOHO/MDI magnetogram data. As a result, we found that (1) in the sunspots, a large amount of positive (right-handed) magnetic helicity was injected during most of the examined time period, (2) in the flare region, there was a continuous injection of negative (left-handed) magnetic helicity during the entire period, accompanied by a large increase of the unsigned magnetic flux, and (3) the flaring regions were mainly composed of emerging bipoles of magnetic fragments in which magnetic field lines have substantially favorable conditions for making reconnection with large-scale, overlying, and oppositely directed magnetic field lines connecting the main sunspots. These observational findings can also be well explained by some MHD numerical simulations for CME initiation (e.g., reconnection-favored emerging flux models). We therefore conclude that reconnection-favored magnetic fields in the flaring emerging flux regions play a crucial role in producing the multiple flare-associated CMEs in NOAA 9236.

  4. Indication of the Hanle Effect by Comparing the Scattering Polarization Observed by CLASP in the Ly α and Si iii 120.65 nm Lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, R.; Kubo, M.; Kano, R.

The Chromospheric Lyman-Alpha Spectro-Polarimeter is a sounding rocket experiment that has provided the first successful measurement of the linear polarization produced by scattering processes in the hydrogen Ly α line (121.57 nm) radiation of the solar disk. In this paper, we report that the Si iii line at 120.65 nm also shows scattering polarization and we compare the scattering polarization signals observed in the Ly α and Si iii lines in order to search for observational signatures of the Hanle effect. We focus on four selected bright structures and investigate how the U / I spatial variations vary between the Ly α wing, the Ly α core, and the Si iii line as a function of the total unsigned photospheric magnetic flux estimated from Solar Dynamics Observatory/Helioseismic and Magnetic Imager observations. In an internetwork region, the Ly α core shows an antisymmetric spatial variation across the selected bright structure, but it does not show it in other more magnetized regions. In the Si iii line, the spatial variation of U / I deviates from the above-mentioned antisymmetric shape as the total unsigned photospheric magnetic flux increases. A plausible explanation of this difference is the operation of the Hanle effect. We argue that diagnostic techniques based on the scattering polarization observed simultaneously in two spectral lines with very different sensitivities to the Hanle effect, like Ly α and Si iii, are of great potential interest for exploring the magnetism of the upper solar chromosphere and transition region.

  5. Average capacity optimization in free-space optical communication system over atmospheric turbulence channels with pointing errors.

    PubMed

    Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui

    2010-10-01

    A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.

  6. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.

  7. Global Surface Temperature Change and Uncertainties Since 1861

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2002-01-01

The objective of this talk is to analyze the warming trend of the global and hemispheric surface temperatures and its uncertainties. A statistical optimal averaging scheme is applied to land surface air temperature and sea surface temperature observations to compute the spatially averaged annual mean surface air temperature. The optimal averaging method is derived by minimizing the mean square error between the true and estimated averages and uses empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 +/- 0.16 C between 1861 and 2000. The lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.

  8. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Evaluation and modification of five techniques for estimating stormwater runoff for watersheds in west-central Florida

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.

    1996-01-01

Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. 
The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for ea

  10. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end inhalation and the end exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper presents a sensitivity analysis of the relative Jacobian error with respect to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random, smooth, biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system are positively correlated across subjects (close to +1), i.e., regions with high sensitivity have, on average, more error in the Jacobian determinant.
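    The central quantity here is the Jacobian determinant of the registration transform. As a point of reference, the following is a minimal numpy sketch (not the authors' code; the function name, grid, and toy field are illustrative) of estimating it from a discrete 2D displacement field by finite differences.

```python
import numpy as np

def jacobian_determinant_2d(ux, uy, spacing=(1.0, 1.0)):
    """Jacobian determinant of the transform x -> x + u(x) on a 2D grid.

    ux, uy : displacement components sampled on a regular (row, col) grid.
    spacing: grid spacing along (row, col), e.g. voxel size in mm.
    """
    dux_dy, dux_dx = np.gradient(ux, spacing[0], spacing[1])
    duy_dy, duy_dx = np.gradient(uy, spacing[0], spacing[1])
    # det(I + grad u) at every grid point
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# toy example: a smooth expansion field should give det > 1
y, x = np.mgrid[0:64, 0:64].astype(float)
ux = 0.05 * (x - 32)      # expansion along x
uy = 0.05 * (y - 32)      # expansion along y
jac = jacobian_determinant_2d(ux, uy)
print(jac.mean())         # ~1.1025 = (1 + 0.05)**2
```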

  11. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
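    The Letter's closed-form expression is not reproduced here, but the quantity it approximates can be sanity-checked by direct Monte Carlo averaging over the fading distribution. Below is a sketch for a single-branch IM/DD on-off-keying link with unit-mean negative-exponential irradiance; the conditional-BER model and SNR definition are assumptions for illustration, not taken from the Letter.

```python
import numpy as np
from math import erfc, sqrt

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def avg_ber_negative_exponential(snr_db, n_samples=200_000, seed=0):
    """Monte Carlo average bit error probability for IM/DD OOK over
    negative-exponential (unit-mean) irradiance fluctuations.

    Conditional BER model: Pb(I) = Q(sqrt(snr) * I) -- an assumed,
    commonly used form, not necessarily the one used in the Letter.
    """
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    irradiance = rng.exponential(scale=1.0, size=n_samples)
    return np.mean([q_func(sqrt(snr) * i) for i in irradiance])

for snr_db in (10, 20, 30):
    print(snr_db, "dB ->", avg_ber_negative_exponential(snr_db))
```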

  12. SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defoor, D; Kabat, C; Papanikolaou, N

    Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times showed a variation of 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm2 and 26 cm2 for VMAT and step & shoot respectively. VMAT beams contained an average overall leaf travel of 34.17 meters and step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47. Step & shoot plans showed an average gamma passing percentage of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in patient treatment times and a low rate of MLC errors.

  13. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
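    The band-dependent optimum averaging times quoted above are simple to tabulate. A short sketch evaluating them at standard 1/3-octave center frequencies (the frequency list is illustrative, not from the report):

```python
# Optimum averaging times from the abstract:
#   Titan IV PLF : T_o = 1.14 s,  T_oi = 4.88 * f_i**-0.2 s
#   Shuttle PLB  : T_o = 1.65 s,  T_oi = 7.10 * f_i**-0.2 s
third_octave_centers = [31.5, 63, 125, 250, 500, 1000, 2000]  # Hz (illustrative)

for f in third_octave_centers:
    t_titan = 4.88 * f ** -0.2
    t_shuttle = 7.10 * f ** -0.2
    print(f"{f:7.1f} Hz   Titan IV PLF: {t_titan:4.2f} s   Shuttle PLB: {t_shuttle:4.2f} s")
```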

  14. The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors

    NASA Technical Reports Server (NTRS)

    Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan

    1993-01-01

    Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.

  15. The Whole Warps the Sum of Its Parts.

    PubMed

    Corbett, Jennifer E

    2017-01-01

    The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers' adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.

  16. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
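    The paper provides Stata and LIMDEP code; the sketch below is a language-neutral illustration of the same three approaches for a simple function of an estimated parameter, g(mu) = exp(mu), with synthetic data (all names and values are placeholders, not the paper's empirical application).

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=1.0, scale=2.0, size=500)        # illustrative sample

# Parameter estimate and its standard error
mu_hat = x.mean()
se_mu = x.std(ddof=1) / np.sqrt(len(x))

# Function of the estimated parameter: g(mu) = exp(mu)
g_hat = np.exp(mu_hat)

# Delta method: se(g) ~= |g'(mu_hat)| * se(mu_hat), with g'(mu) = exp(mu)
se_delta = np.exp(mu_hat) * se_mu

# Krinsky-Robb: draw parameters from their estimated sampling distribution
kr = np.exp(rng.normal(mu_hat, se_mu, size=2000))
se_kr = kr.std(ddof=1)

# Nonparametric bootstrap: re-estimate g on resampled data
boot = np.array([np.exp(rng.choice(x, size=len(x), replace=True).mean())
                 for _ in range(2000)])
se_boot = boot.std(ddof=1)

print(f"g_hat={g_hat:.3f}  delta SE={se_delta:.3f}  "
      f"Krinsky-Robb SE={se_kr:.3f}  bootstrap SE={se_boot:.3f}")
```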

  17. First clinical experience in carbon ion scanning beam therapy: retrospective analysis of patient positional accuracy.

    PubMed

    Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi

    2012-09-01

    Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.

  18. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water yr. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations including the crest-stage and stage-only stations would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)

  19. Application of an Artificial Neural Network to the Prediction of OH Radical Reaction Rate Constants for Evaluating Global Warming Potential.

    PubMed

    Allison, Thomas C

    2016-03-03

    Rate constants for reactions of chemical compounds with hydroxyl radical are a key quantity used in evaluating the global warming potential of a substance. Experimental determination of these rate constants is essential, but it can also be difficult and time-consuming to produce. High-level quantum chemistry predictions of the rate constant can suffer from the same issues. Therefore, it is valuable to devise estimation schemes that can give reasonable results on a variety of chemical compounds. In this article, the construction and training of an artificial neural network (ANN) for the prediction of rate constants at 298 K for reactions of hydroxyl radical with a diverse set of molecules is described. Input to the ANN consists of counts of the chemical bonds and bends present in the target molecule. The ANN is trained using 792 (•)OH reaction rate constants taken from the NIST Chemical Kinetics Database. The mean unsigned percent error (MUPE) for the training set is 12%, and the MUPE of the testing set is 51%. It is shown that the present methodology yields rate constants of reasonable accuracy for a diverse set of inputs. The results are compared to high-quality literature values and to another estimation scheme. This ANN methodology is expected to be of use in a wide range of applications for which (•)OH reaction rate constants are required. The model uses only information that can be gathered from a 2D representation of the molecule, making the present approach particularly appealing, especially for screening applications.
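    A minimal sketch of the same idea — regressing rate constants on counts of structural features with a small feed-forward network — using scikit-learn and synthetic data standing in for the NIST set (the feature definitions, network size, and data are placeholders, not the paper's model):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder feature matrix: counts of bond/bend types per molecule
# (columns might represent C-H, C-C, C=C, O-H, ... counts); the real model
# uses descriptors from the 2D structure and NIST rate constants.
n_molecules, n_features = 400, 12
X = rng.integers(0, 8, size=(n_molecules, n_features)).astype(float)
true_w = rng.normal(size=n_features)
log_k = X @ true_w * 0.1 - 11.0 + rng.normal(scale=0.3, size=n_molecules)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.25, random_state=1)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1))
ann.fit(X_tr, y_tr)

# Mean unsigned percent error on log10(k), a stand-in for the paper's MUPE on k
pred = ann.predict(X_te)
mupe = np.mean(np.abs((pred - y_te) / y_te)) * 100.0
print(f"test MUPE on log10(k): {mupe:.1f}%")
```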

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuechler, Erich R.; Department of Chemistry, University of Minnesota, Minneapolis, Minnesota 55455-0431; Giese, Timothy J.

    To better represent the solvation effects observed along reaction pathways, and of ionic species in general, a charge-dependent variable-radii smooth conductor-like screening model (VR-SCOSMO) is developed. This model is implemented and parameterized with a third order density-functional tight binding quantum model, DFTB3/3OB-OPhyd, a quantum method which was developed for organic and biological compounds, utilizing a specific parameterization for phosphate hydrolysis reactions. Unlike most other applications with the DFTB3/3OB model, an auxiliary set of atomic multipoles is constructed from the underlying DFTB3 density matrix, which is used to couple the solute to the solvent response surface. The resulting method is variational, produces smooth energies, and has analytic gradients. As a baseline, a conventional SCOSMO model with fixed radii is also parameterized. The SCOSMO and VR-SCOSMO models are shown to have comparable accuracy in reproducing neutral-molecule absolute solvation free energies; however, the VR-SCOSMO model is shown to reduce the mean unsigned errors (MUEs) of ionic compounds by half (about 2-3 kcal/mol). The VR-SCOSMO model presents similar accuracy to a charge-dependent Poisson-Boltzmann model introduced by Hou et al. [J. Chem. Theory Comput. 6, 2303 (2010)]. VR-SCOSMO is then used to examine the hydrolysis of trimethylphosphate and seven other phosphoryl transesterification reactions with different leaving groups. Two-dimensional energy landscapes are constructed for these reactions and calculated barriers are compared to those obtained from ab initio polarizable continuum calculations and experiment. Results of the VR-SCOSMO model are in good agreement in both cases, capturing the rate-limiting reaction barrier and the nature of the transition state.

  1. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  2. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  3. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
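    The correction procedure described above can be sketched with synthetic data: normalize each measurement to its central-meridian value, fit the center-to-limb curve with a Chebyshev polynomial, and divide measured values by the fit. The bias shape, scatter, and radial range below are invented for illustration only.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(3)

# Synthetic stand-in: each sample is a whole-AR flux measured at radial
# distance r (0 = disk center, ~0.87 = 60 deg), normalized to the same AR's
# value at central meridian.  A fictitious limb-ward bias plus scatter mimics
# the real center-to-limb curve.
r = rng.uniform(0.0, 0.87, size=5000)
normalized_flux = (1.0 - 0.25 * r**2) * rng.normal(1.0, 0.1, size=r.size)

# Fit the average center-to-limb curve with a low-order Chebyshev polynomial
coeffs = C.chebfit(r, normalized_flux, deg=4)
correction = C.chebval(r, coeffs)          # average fractional projection bias

# De-biased measurements: divide each measured value by the fitted curve
corrected_flux = normalized_flux / correction
print(np.mean(normalized_flux), np.mean(corrected_flux))  # corrected mean ~ 1
```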

  4. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20 percent, the standard error would be reduced by 40 percent. (USGS)

  5. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.

  6. Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia

    PubMed Central

    Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.

    2009-01-01

    Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107

  7. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  8. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
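    For reference, the two daily metrics being compared are typically derived from hourly concentrations as in the pandas sketch below, which uses a synthetic monitor series; the column name and the 8-h completeness rule are illustrative assumptions, not taken from the study.

```python
import numpy as np
import pandas as pd

# Synthetic hourly ozone series for one monitor (ppb)
idx = pd.date_range("2016-07-01", periods=24 * 30, freq="h")
rng = np.random.default_rng(7)
hourly = pd.Series(
    40 + 20 * np.sin(2 * np.pi * (idx.hour - 6) / 24) + rng.normal(0, 5, idx.size),
    index=idx, name="ozone_ppb",
)

# D24A: daily 24-h average
d24a = hourly.resample("D").mean()

# DM8A: daily maximum of the 8-h running mean
rolling8 = hourly.rolling(window=8, min_periods=6).mean()
dm8a = rolling8.resample("D").max()

print(pd.DataFrame({"D24A": d24a, "DM8A": dm8a}).head())
```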

  9. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  10. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
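    The most basic clustering algorithm described here can be sketched on a toy model of ionizable sites: intra-cluster interactions are enumerated exactly while inter-cluster couplings are dropped, and the clustered average occupancies are compared against exact enumeration. The energy model and all parameters below are invented stand-ins, not the paper's protein energetics.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
kT = 0.593  # kcal/mol at ~298 K

# Toy model: E(x) = sum_i a_i x_i + sum_{i<j} W_ij x_i x_j, with x_i in {0, 1}.
# a_i and W_ij are random stand-ins for intrinsic and pairwise interaction terms.
n_sites = 12
a = rng.normal(0.0, 1.0, n_sites)
W = np.triu(rng.normal(0.0, 0.3, (n_sites, n_sites)), 1)   # keep i<j terms only

def average_occupancy(site_indices, a, W):
    """Exact Boltzmann average <x_i> by enumerating all states of the given sites."""
    sites = list(site_indices)
    aa = a[sites]
    WW = W[np.ix_(sites, sites)]
    probs = np.zeros(len(sites))
    Z = 0.0
    for state in itertools.product((0, 1), repeat=len(sites)):
        x = np.array(state, dtype=float)
        E = aa @ x + x @ WW @ x
        w = np.exp(-E / kT)
        Z += w
        probs += w * x
    return probs / Z

# Exact answer: enumerate all 2**12 microstates at once
exact = average_occupancy(range(n_sites), a, W)

# Basic clustering: clusters of 4 sites, inter-cluster couplings ignored
clusters = [range(0, 4), range(4, 8), range(8, 12)]
approx = np.concatenate([average_occupancy(c, a, W) for c in clusters])

print("max abs error in average occupancy:", np.max(np.abs(exact - approx)))
```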

  11. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need a threshold, determined somewhat arbitrarily, by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model, to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.

  12. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159

  13. Quantification and characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wood, Christopher J.; Gambetta, Jay M.

    2018-03-01

    We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.

  14. Jitter compensation circuit

    DOEpatents

    Sullivan, James S.; Ball, Don G.

    1997-01-01

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G=0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G=10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co values squared divided by the total volt-second product of the magnetic compression circuit.

  15. Jitter compensation circuit

    DOEpatents

    Sullivan, J.S.; Ball, D.G.

    1997-09-09

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co values squared divided by the total volt-second product of the magnetic compression circuit. 11 figs.

  16. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
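    A stripped-down illustration of the first of the two corrections — removing the measurement-noise offset from the time-averaged MSD before fitting the exponent — using ordinary Brownian trajectories with added localization noise rather than the subdiffusive telomere data; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps, D, sigma = 0.1, 2000, 0.05, 0.15   # time step, length, diff. coeff., loc. error

def ta_msd(track, max_lag):
    """Time-averaged mean square displacement for lags 1..max_lag."""
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

# Simulate 2D Brownian tracks and add static localization noise
tracks = np.cumsum(rng.normal(0, np.sqrt(2 * D * dt), (50, n_steps, 2)), axis=1)
noisy = tracks + rng.normal(0, sigma, tracks.shape)

lags = np.arange(1, 51)
msd = np.mean([ta_msd(tr, 50) for tr in noisy], axis=0)   # ensemble-averaged TA-MSD

# A naive log-log fit is biased low at short lags by the 4*sigma**2 noise offset ...
alpha_naive = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
# ... subtracting the (here known) noise floor recovers the true exponent
alpha_corr = np.polyfit(np.log(lags * dt), np.log(msd - 4 * sigma**2), 1)[0]
print(f"naive alpha = {alpha_naive:.2f}, corrected alpha = {alpha_corr:.2f} (true 1.0)")
```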

  17. Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22

    DTIC Science & Technology

    the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error it was concluded that the AN/TMQ-22 yielded a more accurate...dew point reading. The average relative humidity error using the sling psychrometer was +9% while the AN/TMQ-22 had a plus or minus 2% error. Even with cautious measurement the sling yielded a +4% error.

  18. Demand Forecasting: An Evaluation of DOD's Accuracy Metric and Navy's Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method.

  19. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    PubMed

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient periods in hospitals and increased costs. Controlling medical errors is very important because, besides being costly, these errors threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data collection was done through the revised Goldstone (2001) questionnaire. SPSS 11.5 software was used for data analysis. Descriptive statistics (standard deviation and relative frequency distribution) were used to calculate means, and the results were presented as tables and charts; the chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average of medical errors was related to employees with three to four years of work experience, while the lowest average was related to those with one to two years of work experience. The highest average of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names among different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives are illegible physician orders, drug name similarity with other drugs, nurse fatigue, and damaged drug labels or packaging, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for underreporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.

  20. Online adaptation of a c-VEP Brain-computer Interface (BCI) based on error-related potentials and unsupervised learning.

    PubMed

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.

  1. The Impact of Subsampling on MODIS Level-3 Statistics of Cloud Optical Thickness and Effective Radius

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros

    2004-01-01

    The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of 1-km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
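    The methodology lends itself to a compact sketch: compute grid-box statistics from a synthetic 1-km field at full resolution and after 5-km subsampling, take their differences, and note that the per-box errors average toward zero over many boxes while the per-box RMS error does not. The lognormal field below is a stand-in, not MODIS data.

```python
import numpy as np

rng = np.random.default_rng(11)

def grid_stats(field):
    """Mean, standard deviation, and a simple inhomogeneity parameter of one grid box."""
    return field.mean(), field.std(), field.std() / field.mean()

# Synthetic 1-km cloud optical thickness for many 100x100-pixel (~1 deg) grid boxes
n_boxes = 500
errors = []
for _ in range(n_boxes):
    tau = rng.lognormal(mean=2.0, sigma=0.6, size=(100, 100))   # stand-in Level-2 field
    full = np.array(grid_stats(tau))
    sub = np.array(grid_stats(tau[::5, ::5]))                   # 5-km subsampling
    errors.append(sub - full)

errors = np.array(errors)
print("mean error over boxes :", errors.mean(axis=0))           # ~0: random errors average out
print("per-box RMS error     :", np.sqrt((errors**2).mean(axis=0)))
```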

  2. Interpreting the Latitudinal Structure of Differences Between Modeled and Observed Temperature Trends (Invited)

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Mears, C. A.; Gleckler, P. J.; Solomon, S.; Wigley, T.; Arblaster, J.; Cai, W.; Gillett, N. P.; Ivanova, D. P.; Karl, T. R.; Lanzante, J.; Meehl, G. A.; Stott, P.; Taylor, K. E.; Thorne, P.; Wehner, M. F.; Zou, C.

    2010-12-01

    We perform the most comprehensive comparison to date of simulated and observed temperature trends. Comparisons are made for different latitude bands, timescales, and temperature variables, using information from a multi-model archive and a variety of observational datasets. Our focus is on temperature changes in the lower troposphere (TLT), the mid- to upper troposphere (TMT), and at the sea surface (SST). For SST, TLT, and TMT, trend comparisons over the satellite era (1979 to 2009) always yield closest agreement in mid-latitudes of the Northern Hemisphere. There are pronounced discrepancies in the tropics and in the Southern Hemisphere: in both regions, the multi-model average warming is consistently larger than observed. At high latitudes in the Northern Hemisphere, the observed tropospheric warming exceeds multi-model average trends. The similarity in the latitudinal structure of this discrepancy pattern across different temperature variables and observational data sets suggests that these trend differences are real, and are not due to residual inhomogeneities in the observations. The interpretation of these results is hampered by the fact that the CMIP-3 multi-model archive analyzed here convolves errors in key external forcings with errors in the model response to forcing. Under a "forcing error" interpretation, model-average temperature trends in the Southern Hemisphere extratropics are biased warm because many models neglect (and/or inaccurately specify) changes in stratospheric ozone and the indirect effects of aerosols. An alternative "response error" explanation for the model trend errors is that there are fundamental problems with model clouds and ocean heat uptake over the Southern Ocean. When SST changes are compared over the longer period 1950 to 2009, there is close agreement between simulated and observed trends poleward of 50°S. This result is difficult to reconcile with the hypothesis that the trend discrepancies over 1979 to 2009 are primarily attributable to response errors. Our results suggest that biases in multi-model average temperature trends over the satellite era can be plausibly linked to forcing errors. Better partitioning of the forcing and response components of model errors will require a systematic program of numerical experimentation, with a focus on exploring the climate response to uncertainties in key historical forcings.

  3. Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor

    2015-04-01

    For use in investigating the magnetic causes of coronal heating in active regions and for use in forecasting an active region’s productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousands of HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.

  4. [Use of personal computers by diplomats of anesthesiology in Japan].

    PubMed

    Yamamoto, K; Ohmura, S; Tsubokawa, T; Kita, M; Kushida, Y; Kobayashi, T

    1999-04-01

    Use of personal computers by diplomats of the Japanese Board of Anesthesiology working in Japanese university hospitals was investigated. Unsigned questionnaires were returned from 232 diplomats of 18 anesthesia departments. The age of responders ranged from twenties to sixties. Personal computer systems are used by 223 diplomats (96.1%), while nine (3.9%) do not use them. The computer systems used are: Apple Macintosh 77%, IBM compatible PC 21% and UNIX 2%. Although 197 diplomats have e-mail addresses, only 162 of them actually send and receive e-mails. Diplomats in fifties use e-mail most actively and those in sixties come second.

  5. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion approaches for the Laplace transform are used to obtain a semianalytic solution. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in Laplace space. The methods are compared on some simple functions to assess their accuracy and the parameters used in the calculation of American put options. The result obtained is the performance of each method in terms of accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computation time of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computation time of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computation time of 0.020793 seconds.
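    The accuracy-check procedure — invert a transform with a known analytic inverse and report the average relative error — can be reproduced with any numerical inversion scheme. The sketch below uses the Gaver-Stehfest method, which is not one of the three methods compared in the paper, purely to illustrate the procedure on the pair F(s) = 1/(s+1), f(t) = exp(-t).

```python
import numpy as np
from math import factorial, log

def stehfest_coefficients(N=12):
    """Gaver-Stehfest weights V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)) / (
                factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                * factorial(i - k) * factorial(2 * k - i))
        V.append((-1) ** (N // 2 + i) * s)
    return np.array(V)

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by Gaver-Stehfest."""
    V = stehfest_coefficients(N)
    i = np.arange(1, N + 1)
    return log(2) / t * np.sum(V * np.array([F(k * log(2) / t) for k in i]))

# Accuracy check on a simple transform pair, as in the abstract's comparison
ts = np.linspace(0.1, 5.0, 50)
rel_err = [abs(invert_laplace(lambda s: 1.0 / (s + 1.0), t) - np.exp(-t)) / np.exp(-t)
           for t in ts]
print("average relative error:", np.mean(rel_err))
```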

  6. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is widely recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error related to a single-ping measurement is typically too high for it to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored in the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging has been disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements has been collected every 36 seconds during a period of approximately 5 months. The large amount of data has been handled smoothly by the instrument, and no abnormal battery consumption has been recorded. On the other hand, a long and unique series of very high frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way: from a statistical point of view, the availability of single-ping measurements allows a real estimate of the (a posteriori) ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by the a posteriori averaging is ˜15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence), of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by the ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the Mediterranean outflow current.
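    The reported gap between the a priori error (~2 cm/s at 50 pings) and the a posteriori error (~15 cm/s) is what one expects when an external, ping-to-ping-correlated error source is present. A toy sketch follows; all magnitudes and the correlation length are invented, only loosely inspired by the abstract's numbers.

```python
import numpy as np

rng = np.random.default_rng(8)

# Single-ping horizontal velocity model (m/s): true current + instrument noise
# + an "external" turbulent fluctuation that is correlated over many pings and
# therefore does not average down as 1/sqrt(N).
n_pings = 200_000
true_v = 0.80
instrument = rng.normal(0.0, 0.14, n_pings)                        # a priori single-ping error
turbulence = np.repeat(rng.normal(0.0, 0.15, n_pings // 50), 50)   # correlated over ~50 pings
single_ping = true_v + instrument + turbulence

for ensemble_size in (1, 5, 10, 50, 100):
    n_ens = n_pings // ensemble_size
    ens = single_ping[: n_ens * ensemble_size].reshape(n_ens, ensemble_size).mean(axis=1)
    print(f"N={ensemble_size:4d}  a posteriori ensemble std = {ens.std():.3f} m/s")
```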

  7. Using video recording to identify management errors in pediatric trauma resuscitation.

    PubMed

    Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon

    2006-03-01

    To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.

  8. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
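
    The quoted proportional biases follow from a first-order sensitivity analysis of the travel-time velocity estimate, which scales as L/cos(theta); the sketch below assumes a typical 45-degree path angle, which is not stated in the record.

      import numpy as np

      # First-order sensitivity of an acoustic travel-time velocity estimate,
      # v proportional to L / cos(theta), to path-length and path-angle errors:
      # dv/v ~= dL/L + tan(theta) * d_theta.
      theta = np.radians(45.0)          # assumed path angle
      d_theta = np.radians(1.0)         # 1-degree angle error
      L, dL = 100.0, 1.0                # 1 m path-length error over 100 m

      print(f"angle term : {np.tan(theta) * d_theta:.2%}")   # ~1.7%, consistent with the ~2% quoted
      print(f"length term: {dL / L:.2%}")                    # 1%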

  9. Comparison of algorithms for automatic border detection of melanoma in dermoscopy images

    NASA Astrophysics Data System (ADS)

    Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert

    2016-09-01

    Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast, using contrast-limited adaptive histogram equalization (CLAHE), followed by artifacts removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically-generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. Average error for test images was 0.10, using the new algorithm, and 0.99, using the SRM method. In comparing the average error values produced by the two algorithms, it is evident that the average XOR error for our technique is lower than that of the SRM method, thereby implying that the new algorithm detects borders of melanomas more accurately than the SRM algorithm.
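
    A minimal sketch of the mask-comparison step is shown below; the XOR error is computed here as the mismatched area divided by the manual lesion area, which is one common convention and may differ in detail from the definition of [3].

      import numpy as np

      def xor_border_error(auto_mask, manual_mask):
          """XOR error between an automatic and a manual lesion mask
          (mismatched pixels divided by the manual lesion area)."""
          auto = auto_mask.astype(bool)
          manual = manual_mask.astype(bool)
          return np.logical_xor(auto, manual).sum() / manual.sum()

      # Toy example with two overlapping square "lesions"
      manual = np.zeros((100, 100), dtype=bool); manual[20:80, 20:80] = True
      auto = np.zeros((100, 100), dtype=bool);   auto[25:85, 25:85] = True
      print(round(xor_border_error(auto, manual), 3))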

  10. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems deploying large numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on upper floors of a structure are excited at a natural frequency, resulting in induced shaking within a specific narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm applies the discrete Fourier transform to obtain the frequency-domain representation and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing of seismic acceleration in terms of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
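
    A minimal sketch of the compress/restore steps on a synthetic floor response is given below; the 200 Hz sampling rate, 3 Hz dominant frequency and noise level are assumptions, not values from the paper, and the paper's exact error normalization may differ from the plain mean absolute error printed here.

      import numpy as np

      def compress(signal, keep_fraction=1.0 / 32):
          """Sensor side: DFT, then keep only the lowest-frequency band (plus DC)."""
          spectrum = np.fft.rfft(signal)
          n_keep = max(1, int(len(spectrum) * keep_fraction))
          return spectrum[:n_keep], len(signal)

      def restore(band, n_samples):
          """Receiving node: zero-pad the transmitted band and invert the DFT."""
          spectrum = np.zeros(n_samples // 2 + 1, dtype=complex)
          spectrum[: len(band)] = band
          return np.fft.irfft(spectrum, n=n_samples)

      rng = np.random.default_rng(1)
      fs, n = 200.0, 4096                                    # assumed sampling
      t = np.arange(n) / fs
      acc = np.sin(2 * np.pi * 3.0 * t) + 0.05 * rng.standard_normal(n)  # synthetic floor response

      band, n_samples = compress(acc)
      acc_restored = restore(band, n_samples)
      print("average error:", np.mean(np.abs(acc - acc_restored)))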

  11. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  12. Flavour and identification threshold detection overview of Slovak adepts for certified testing.

    PubMed

    Vietoris, Vladimír; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian

    2016-07-01

    During the certification process for sensory assessors run by the Slovak certification body, we obtained results on basic taste thresholds and lifestyle habits. 500 adults with a food-industry background were screened during the experiment. For the analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438), while the 56+ y.o. group produced the highest (2.0). The average error value by gender was 1.510 for women, compared with 1.477 for men. People with allergies had an average error ratio of 1.437, compared with 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). A further flavour threshold identification test detected differences among age groups (error values increased with age). The highest share of errors made by men was in metallic taste (24%), the same taste as for women (22%). Men also produced a higher error ratio than women in salty taste (19% versus 10%). The analysis detected some differences between the allergic/non-allergic and smoker/non-smoker groups.

  13. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  14. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search covering 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements on the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system used to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is intended as a means to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912

  16. Value stream mapping of the Pap test processing procedure: a lean approach to improve quality and efficiency.

    PubMed

    Michael, Claire W; Naik, Kalyani; McVicker, Michael

    2013-05-01

    We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.

  17. SU-F-J-42: Comparison of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Cranial Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Shi, W; Andrews, D

    2016-06-15

    Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom, which was designed to evaluate isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, which represented clinical treatment sites. CBCT and ExacTrac x-ray images were taken when the phantom was located at each isocenter. The patient study included thirteen patients. CBCT and ExacTrac x-ray images were taken at each patient’s treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were: 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm, in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.36±0.11 degree, 0.14±0.10 degree, and 0.12±0.10 degree, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were: 0.13±0.13 mm, 0.37±0.21 mm, 0.22±0.17 mm, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.30±0.10 degree, 0.18±0.11 degree, and 0.22±0.13 degree, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in the phantom and patient studies where isocenters were located in or close to frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and were within 0.4 degree in the rotational directions.

  18. Image guidance during head-and-neck cancer radiation therapy: analysis of alignment trends with in-room cone-beam computed tomography scans.

    PubMed

    Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy

    2012-06-01

    On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. 56 patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy had recorded shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for uncorrected, five CBCT corrected, and 10 CBCT corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions, this percentage decreased to 19.0% of all fractions delivered and decreased the percentage of patients with average daily 3D errors >5 mm from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimation of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefited those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
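
    A sketch of the correction logic described here, using hypothetical daily couch shifts rather than the study's data, is shown below: the mean of the first five CBCT shifts is taken as the systematic setup error and subtracted from the later fractions before computing the residual 3D error.

      import numpy as np

      def residual_3d_errors(daily_shifts, n_correct=5):
          """Residual 3D setup error for the fractions after the correction is applied:
          the mean of the first n_correct CBCT couch shifts is taken as the
          systematic setup error and subtracted from the later daily shifts."""
          shifts = np.asarray(daily_shifts, dtype=float)          # (n_fractions, 3) in mm
          systematic = shifts[:n_correct].mean(axis=0)
          residual = shifts[n_correct:] - systematic
          return np.linalg.norm(residual, axis=1)

      # Hypothetical ML / SI / AP daily shifts (mm): a fixed systematic offset
      # plus random day-to-day variation (values are illustrative only).
      rng = np.random.default_rng(3)
      shifts = rng.normal(loc=[2.0, -1.5, 3.0], scale=2.0, size=(33, 3))
      err3d = residual_3d_errors(shifts)
      print("mean residual 3D error (mm):", round(err3d.mean(), 2),
            "| fractions > 5 mm off target:", int((err3d > 5).sum()))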

  19. Stable estimate of primary OC/EC ratios in the EC tracer method

    NASA Astrophysics Data System (ADS)

    Chu, Shao-Hang

    In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques in the estimation of primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high values of OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging individual variables separately but also acts as a weighted average of ratios to minimize the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of OC/EC ratios ranks a close second, and the geometric mean of ratios ranks third. This is because their estimations are insensitive to questionable extreme values. A real-world example is given using the ambient data collected from an Atlanta STN site during the winter of 2001-2002.
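
    The ranking of the simple estimators can be illustrated with a small simulation; the noise levels and the assumed true ratio below are arbitrary, and non-combustion OC is set to zero, which is the case in which the ratio of the averages is expected to be the most robust estimator.

      import numpy as np

      rng = np.random.default_rng(7)
      true_ratio, n = 2.5, 500
      ec_true = rng.lognormal(mean=0.0, sigma=0.6, size=n)                        # "true" EC (ug/m3)
      ec = np.clip(ec_true + rng.normal(0.0, 0.15, n), 0.01, None)                # measured EC with noise
      oc = np.clip(true_ratio * ec_true + rng.normal(0.0, 0.30, n), 0.01, None)   # primary OC, no non-combustion OC

      ratios = oc / ec
      print("ratio of the averages :", round(oc.mean() / ec.mean(), 3))           # robust estimator
      print("median of ratios      :", round(float(np.median(ratios)), 3))
      print("geometric mean        :", round(float(np.exp(np.log(ratios).mean())), 3))
      print("mean of ratios        :", round(ratios.mean(), 3))                   # inflated by low-EC samples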

  20. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively few number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data is scarce.
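
    The core comparison, a binned (histogram) breakthrough curve versus a KDE built from the same particle arrival times, can be sketched as below; the lognormal arrival-time distribution is a stand-in reference, scipy's default bandwidth is used rather than the paper's optimal-h (ANA) selection, and the error metric is a plain mean absolute deviation from the reference curve.

      import numpy as np
      from scipy.stats import gaussian_kde, lognorm

      rng = np.random.default_rng(42)
      # Hypothetical particle arrival times (days); the lognormal stands in for a
      # solute breakthrough curve with a known reference shape.
      mu, sigma = 3.0, 0.5
      ref = lognorm(s=sigma, scale=np.exp(mu))
      t = np.linspace(1.0, 100.0, 400)

      for n_particles in (10**2, 10**4):
          arrivals = rng.lognormal(mu, sigma, n_particles)
          hist, edges = np.histogram(arrivals, bins=40, range=(0, 100), density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          hist_err = np.mean(np.abs(np.interp(t, centers, hist) - ref.pdf(t)))
          kde_err = np.mean(np.abs(gaussian_kde(arrivals)(t) - ref.pdf(t)))
          print(f"np = {n_particles:>6}: mean abs error  histogram {hist_err:.5f} | KDE {kde_err:.5f}")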

  1. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are ≥0.6 hPa in the free troposphere, with nearly a third ≥1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can be greater than 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance compared to the other radiosondes, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated/reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (5 percent).
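
    Because the mixing ratio is obtained from the ratio of the measured O3 partial pressure to the ambient pressure, the fractional O3MR error scales directly with the pressure offset; a minimal sketch (assuming roughly 20 hPa of ambient pressure at 26 km, as implied by the 1.0 hPa = 5 percent statement) is given below.

      # Fractional ozone mixing ratio error caused by a radiosonde pressure offset.
      # O3MR is proportional to pO3 / P, so to first order a pressure offset dP
      # changes the retrieved mixing ratio by roughly -dP / P.
      def o3mr_fractional_error(pressure_hpa, offset_hpa):
          return -offset_hpa / pressure_hpa

      # At ~26 km the ambient pressure is about 20 hPa (value implied by the text).
      for dp in (0.6, 1.0, -1.95):
          print(f"offset {dp:+.2f} hPa -> O3MR error {o3mr_fractional_error(20.0, dp):+.1%}")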

  2. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics due to 1) evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  3. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  4. The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance

    ERIC Educational Resources Information Center

    Castellano, Katherine E.; McCaffrey, Daniel F.

    2017-01-01

    Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…

  5. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of this study was to describe fraction-calculation errors among fourth-grade students and to determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low-, average-, or high-achieving). We…

  6. Performance of Physical Examination Skills in Medical Students during Diagnostic Medicine Course in a University Hospital of Northwest China

    PubMed Central

    Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S.; Liu, Zhengwen; Lv, Yi; Shi, Bingyin

    2014-01-01

    This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and the percentage in examination errors between male and female students and among the different body parts examined were significantly different (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students. PMID:25329685

  8. Assessing the Library Homepages of COPLAC Institutions for Section 508 Accessibility Errors: Who's Accessible, Who's Not, and How the Online WebXACT Assessment Tool Can Help

    ERIC Educational Resources Information Center

    Huprich, Julia; Green, Ravonne

    2007-01-01

    The library websites of Council of Public Liberal Arts Colleges (COPLAC) institutions were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…

  9. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.

  10. Cost-effectiveness of the U.S. Geological Survey's stream-gaging programs in Massachusetts and Rhode Island

    USGS Publications Warehouse

    Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.

    1985-01-01

    The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for 91 gaging stations being operated in Massachusetts and Rhode Island. Some of the Massachusetts stations are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of those studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)

  11. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
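
    The reported figures of merit can be illustrated on synthetic data; the sketch below is a toy multi-line calibration with additive shot noise only (the paper's spectrum-standardization model of plasma-parameter fluctuations is not reproduced), comparing a single-line calibration with a multi-line PLS model in terms of average RSD and RMSE, with arbitrary sample sizes and noise levels and an in-sample evaluation for brevity.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(5)
      # Toy LIBS data set: 29 samples x 20 shots x 40 analyte lines, additive noise only.
      n_samples, n_shots, n_lines = 29, 20, 40
      conc = rng.uniform(55.0, 95.0, n_samples)                 # Cu concentration (%)
      sens = rng.uniform(0.5, 1.5, n_lines)                     # line sensitivities
      spectra = conc[:, None, None] * sens + rng.normal(0.0, 8.0, (n_samples, n_shots, n_lines))

      X, y = spectra.reshape(-1, n_lines), np.repeat(conc, n_shots)

      def figures_of_merit(pred_per_shot):
          pred = pred_per_shot.reshape(n_samples, n_shots)
          rsd = np.mean(pred.std(axis=1) / pred.mean(axis=1))   # pulse-to-pulse precision
          rmse = np.sqrt(np.mean((pred.mean(axis=1) - conc) ** 2))
          return rsd, rmse

      # Univariate calibration on the single strongest line vs. multi-line PLS
      line = X[:, np.argmax(sens)]
      coeffs = np.polyfit(line, y, 1)
      pls = PLSRegression(n_components=3).fit(X, y)
      for name, pred in [("single line", np.polyval(coeffs, line)),
                         ("multi-line PLS", pls.predict(X).ravel())]:
          rsd, rmse = figures_of_merit(pred)
          print(f"{name:>14}: average RSD {rsd:.2%}, RMSE {rmse:.2f} % Cu")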

  12. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, W. Howard

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting the precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging, of those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of approximately 3.

  13. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
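
    For orientation, the classical Ruze relation that the paper's model generalizes can be evaluated in a few lines; the sketch below computes the gain loss for a uniformly illuminated reflector only (no nonuniform rms zones, illumination taper, or F/D dependence), with an illustrative 10 mm wavelength.

      import numpy as np

      def ruze_gain_loss_db(rms_error, wavelength):
          """Classical Ruze gain loss (dB) for a reflector with rms surface error.
          Uniform-illumination formula only; the paper's model extends this."""
          delta = (4.0 * np.pi * rms_error / wavelength) ** 2
          return 10.0 * np.log10(np.exp(-delta))

      wavelength = 0.01                         # 10 mm (30 GHz), illustrative
      for eps in (wavelength / 100, wavelength / 50, wavelength / 20):
          print(f"rms/wavelength = {eps / wavelength:.3f} -> "
                f"gain loss {ruze_gain_loss_db(eps, wavelength):+.2f} dB")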

  14. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget would increase the optimal average standard error per station from 11.6% to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  15. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. Then, to quantify forecast performance, we computed the mean error, mean absolute error, root mean square error, multiplicative bias and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and to the reference-forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of about 1 Kp unit, even though persistence beats it.
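
    A sketch of the verification statistics named here, applied to a synthetic 3-hourly Kp series, is given below; the autocorrelated toy series, the 1-unit prediction noise, and the Kp ≥ 5 event threshold are all assumptions rather than values from the study.

      import numpy as np

      def verify(forecast, observed, event_threshold=5.0):
          """Basic verification statistics for a Kp forecast series."""
          err = forecast - observed
          stats = {
              "ME": err.mean(),
              "MAE": np.abs(err).mean(),
              "RMSE": np.sqrt((err ** 2).mean()),
              "mult. bias": forecast.mean() / observed.mean(),
              "corr": np.corrcoef(forecast, observed)[0, 1],
          }
          # 2x2 contingency table for storm-level events (Kp >= threshold)
          hits = np.sum((forecast >= event_threshold) & (observed >= event_threshold))
          misses = np.sum((forecast < event_threshold) & (observed >= event_threshold))
          false_alarms = np.sum((forecast >= event_threshold) & (observed < event_threshold))
          stats["POD"] = hits / (hits + misses) if hits + misses else float("nan")
          stats["FAR"] = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
          return stats

      rng = np.random.default_rng(11)
      n = 2000
      disturbance = np.zeros(n)
      for i in range(1, n):                              # autocorrelated toy Kp driver
          disturbance[i] = 0.9 * disturbance[i - 1] + rng.normal(0.0, 0.6)
      obs = np.clip(2.5 + disturbance, 0.0, 9.0)
      persistence = np.roll(obs, 1)                      # previous synoptic period
      prediction = np.clip(obs + rng.normal(0.0, 1.0, n), 0.0, 9.0)

      for name, fc in [("persistence", persistence), ("prediction", prediction)]:
          print(name, {k: round(v, 2) for k, v in verify(fc, obs).items()})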

  16. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    NASA Technical Reports Server (NTRS)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.

  17. NOTE: Optimization of megavoltage CT scan registration settings for thoracic cases on helical tomotherapy

    NASA Astrophysics Data System (ADS)

    Woodford, Curtis; Yartsev, Slav; Van Dyk, Jake

    2007-08-01

    This study aims to investigate the settings that provide optimum registration accuracy when registering megavoltage CT (MVCT) studies acquired on tomotherapy with planning kilovoltage CT (kVCT) studies of patients with lung cancer. For each experiment, the systematic difference between the actual and planned positions of the thorax phantom was determined by setting the phantom up at the planning isocenter, generating and registering an MVCT study. The phantom was translated by 5 or 10 mm, MVCT scanned, and registration was performed again. A root-mean-square equation that calculated the residual error of the registration based on the known shift and systematic difference was used to assess the accuracy of the registration process. The phantom study results for 18 combinations of different MVCT/kVCT registration options are presented and compared to clinical registration data from 17 lung cancer patients. MVCT studies acquired with coarse (6 mm), normal (4 mm) and fine (2 mm) slice spacings could all be registered with similar residual errors. No specific combination of resolution and fusion selection technique resulted in a lower residual error. A scan length of 6 cm with any slice spacing registered with the full image fusion selection technique and fine resolution will result in a low residual error most of the time. On average, large corrections made manually by clinicians to the automatic registration values are infrequent. Small manual corrections within the residual error averages of the registration process occur, but their impact on the average patient position is small. Registrations using the full image fusion selection technique and fine resolution of 6 cm MVCT scans with coarse slices have a low residual error, and this strategy can be clinically used for lung cancer patients treated on tomotherapy. Automatic registration values are accurate on average, and a quick verification on a sagittal MVCT slice should be enough to detect registration outliers.

  18. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.

  19. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

    We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map - water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary 4-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2 or 0.4% in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination, if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ in only right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne, ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down fourfold to 3%. More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.

  20. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).

  1. Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.

    PubMed

    Gapsys, Vytautas; de Groot, Bert L

    2017-12-12

    Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .

  2. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
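
    A toy Monte Carlo in the same spirit is sketched below; the observer geometry, the 3-degree bearing standard deviation and the movement distances are illustrative assumptions, and the location error is measured relative to the animal's position at the first bearing.

      import numpy as np

      rng = np.random.default_rng(2)
      obs1 = np.array([-700.0, -700.0])                  # observer positions (m)
      obs2 = np.array([700.0, -700.0])

      def intersect(o1, b1, o2, b2):
          """Location estimate from two bearings (radians from the +x axis)."""
          d1 = np.array([np.cos(b1), np.sin(b1)])
          d2 = np.array([np.cos(b2), np.sin(b2)])
          t = np.linalg.solve(np.column_stack([d1, -d2]), o2 - o1)
          return o1 + t[0] * d1

      def average_error(move_dist, n_trials=5000, bearing_sd=np.radians(3.0)):
          """Average location error when the animal moves between the two bearings."""
          errors = []
          for _ in range(n_trials):
              pos1 = rng.uniform(-200.0, 200.0, 2)                 # position at first bearing
              step = rng.normal(size=2)
              pos2 = pos1 + (move_dist / np.linalg.norm(step)) * step
              b1 = np.arctan2(*(pos1 - obs1)[::-1]) + rng.normal(0.0, bearing_sd)
              b2 = np.arctan2(*(pos2 - obs2)[::-1]) + rng.normal(0.0, bearing_sd)
              errors.append(np.linalg.norm(intersect(obs1, b1, obs2, b2) - pos1))
          return np.mean(errors)

      for d in (0.0, 250.0, 500.0):
          print(f"movement {d:5.0f} m -> average location error {average_error(d):7.1f} m")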

  3. Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data

    NASA Technical Reports Server (NTRS)

    Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen

    2006-01-01

    This study presents results of comparisons between instrumental radiation data on the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly mean errors. Although the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the site and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake related to the elevation effect, whereas the errors in SRB LW were mainly due to significant errors in input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas such as Tibet.

  4. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a mapping process based on GIS with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
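    A simplified illustration of the idea: each candidate forecast model is weighted by how well it explained past observations, and the weighted combination is used as the forecast, with accuracy summarized by the mean square error. This sketch uses a likelihood-based weighting as a stand-in for the full EM-fitted BMA, and all numbers are hypothetical:

    ```python
    import numpy as np

    def bma_weights(past_forecasts, past_obs, sigma=1.0):
        """Simplified BMA: weight each model by the Gaussian likelihood of its
        past forecast errors (a stand-in for the EM-fitted weights of full BMA)."""
        # past_forecasts has shape (n_models, n_times)
        log_lik = -0.5 * np.sum((past_forecasts - past_obs) ** 2, axis=1) / sigma ** 2
        w = np.exp(log_lik - log_lik.max())
        return w / w.sum()

    def mse(pred, obs):
        return float(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

    # hypothetical minimum-temperature forecasts from three models and the observations
    past_fc = np.array([[21.0, 22.5, 20.1, 19.8],
                        [22.3, 23.1, 21.0, 20.5],
                        [20.2, 21.9, 19.7, 19.0]])
    past_obs = np.array([21.2, 22.8, 20.3, 19.9])

    w = bma_weights(past_fc, past_obs)
    today_fc = np.array([20.8, 21.6, 20.1])      # each model's forecast for today
    combined = float(w @ today_fc)               # BMA point forecast
    print(w.round(3), round(combined, 2), round(mse(past_fc[0], past_obs), 3))
    ```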

  5. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
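    The inflation described above can be reproduced with a small simulation: generate a null (unassociated) trait with a skewed distribution, regress it on a rare variant's genotype dosage, and count how often the p-value falls below the nominal threshold. The sample sizes, allele frequencies, and gamma parameters below are illustrative, not those of the GAW 19 data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def type1_rate(maf, n=2000, n_sim=2000, alpha=0.05):
        """Empirical type I error of simple linear regression of a null trait on SNV dosage."""
        hits, valid = 0, 0
        for _ in range(n_sim):
            geno = rng.binomial(2, maf, size=n)            # genotype dosage under HWE
            if geno.std() == 0:                            # skip monomorphic draws at very low MAF
                continue
            trait = rng.gamma(shape=1.0, scale=2.0, size=n)  # skewed null trait, no SNV effect
            res = stats.linregress(geno, trait)
            hits += res.pvalue < alpha
            valid += 1
        return hits / valid

    for maf in (0.20, 0.01, 0.002):
        print(f"MAF {maf}: empirical type I error {type1_rate(maf):.3f} at alpha = 0.05")
    ```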

  6. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. For the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion once the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average PTV dose difference as a function of error magnitude showed good agreement, within 1%, between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).

  7. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. In the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions are less reliable than those of Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
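    Models 1 and 2 above follow the same pattern: transform the flows, then fit a first-order autoregressive model to the forecast errors. A minimal sketch of that pattern with a fixed Box-Cox exponent and hypothetical inflow values (the conditioning of parameters on weather classes is omitted here):

    ```python
    import numpy as np
    from scipy import stats

    def boxcox(x, lam=0.3):
        """Box-Cox transform (lam -> 0 recovers the log transform)."""
        return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

    def fit_ar1(err):
        """Fit e_t = c + phi * e_{t-1} + eps to the transformed forecast errors."""
        res = stats.linregress(err[:-1], err[1:])
        resid = err[1:] - (res.intercept + res.slope * err[:-1])
        return res.slope, res.intercept, resid.std(ddof=2)

    # hypothetical observed and forecasted reservoir inflows (m^3/s)
    obs = np.array([12.0, 15.5, 14.2, 18.9, 22.4, 20.1, 17.3, 16.0, 19.8, 21.5])
    fc  = np.array([11.0, 14.0, 15.0, 17.5, 20.0, 21.0, 18.0, 15.0, 18.5, 22.0])

    err = boxcox(obs) - boxcox(fc)           # errors in transformed space
    phi, c, sigma = fit_ar1(err)
    next_err = c + phi * err[-1]             # one-step-ahead expected error
    print(round(phi, 2), round(sigma, 3), round(next_err, 3))
    ```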

  8. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We…

  9. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  10. Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida

    USGS Publications Warehouse

    Turner, J.F.

    1979-01-01

    A modified version of the Georgia Tech Watershed Model was applied for the purpose of flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibration range from 0.91 to 0.98 and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74 and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)

  11. ERROR IN ANNUAL AVERAGE DUE TO USE OF LESS THAN EVERYDAY MEASUREMENTS

    EPA Science Inventory

    Long-term averages of the concentration of PM mass and components are of interest for determining compliance with annual averages, for developing exposure surrogates for cross-sectional epidemiologic studies of the long-term effects of PM, and for determination of aerosol sources by chem...

  12. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-01-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most of the boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another one for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form a graph node. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively. PMID:23837966
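    The core operation in the steps above, building a Gaussian affinity graph from block positions and mean intensities and embedding it with a diffusion map, can be sketched as follows. The toy data and kernel width are illustrative assumptions; the full method's grouping of pixels into rectangular nodes and its layer-localization logic are not reproduced here.

    ```python
    import numpy as np

    def diffusion_map(features, eps=1.0, n_components=2):
        """Basic diffusion map: Gaussian affinities, row-normalized into a Markov
        matrix, then the leading non-trivial eigenvectors used as coordinates."""
        d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
        W = np.exp(-d2 / eps)                        # affinity from distance + intensity
        P = W / W.sum(axis=1, keepdims=True)         # transition (diffusion) matrix
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        return vecs.real[:, order[1:n_components + 1]]   # skip the constant eigenvector

    # toy "nodes": (row, column, mean intensity) of rectangular pixel blocks
    rng = np.random.default_rng(0)
    bright = np.column_stack([rng.uniform(0, 10, 30), rng.uniform(0, 10, 30),
                              rng.normal(5.0, 0.3, 30)])
    dark = np.column_stack([rng.uniform(0, 10, 30), rng.uniform(20, 30, 30),
                            rng.normal(1.0, 0.3, 30)])
    coords = diffusion_map(np.vstack([bright, dark]), eps=20.0)
    print(coords[:2], coords[-2:])                   # the two groups separate in the embedding
    ```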

  13. A full Bayes before-after study accounting for temporal and spatial effects: Evaluating the safety impact of new signal installations.

    PubMed

    Sacchi, Emanuele; Sayed, Tarek; El-Basyouny, Karim

    2016-09-01

    Recently, important advances in road safety statistics have been brought about by methods able to address issues other than the choice of the best error structure for modeling crash data. In particular, accounting for spatial and temporal interdependence, i.e., the notion that collision occurrence at a site or time period depends on that at other sites or periods, has become an important issue that needs further research. Overall, autoregressive models can be used for this purpose as they can specify that the output variable depends on its own previous values and on a stochastic term. Spatial effects have been investigated and applied mostly in the context of developing safety performance functions (SPFs) to relate crash occurrence to highway characteristics. Hence, there is a need for studies that attempt to estimate the effectiveness of safety countermeasures by including the spatial interdependence of road sites within the context of an observational before-after (BA) study. Moreover, the combination of temporal dynamics and spatial effects on crash frequency has not been explored in depth for SPF development. Therefore, the main goal of this research was to carry out a BA study accounting for spatial effects and temporal dynamics in evaluating the effectiveness of a road safety treatment. The countermeasure analyzed was the installation of traffic signals at unsignalized urban/suburban intersections in British Columbia (Canada). The full Bayes approach was selected as the statistical framework to develop the models. The results demonstrated that zone variation was a major component of total crash variability and that spatial effects were alleviated by clustering intersections together. Finally, the methodology used also allowed estimation of the treatment's effectiveness in the form of crash modification factors and functions with time trends. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.

    PubMed

    Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M

    2004-04-05

    In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline) europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression which computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes, selected from the Cambridge Structural Database 2003, and compared our predicted ground state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better capable of predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances, in all 96 complexes, which is 0.3564 Å for the original SMLC, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that this model is more applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light conversion molecular devices.

  15. Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.

    PubMed

    Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M

    2006-01-01

    The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05 Å), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present day ab initio/ECP geometries, while being hundreds of times faster.

  16. MN15-L: A New Local Exchange-Correlation Functional for Kohn-Sham Density Functional Theory with Broad Accuracy for Atoms, Molecules, and Solids.

    PubMed

    Yu, Haoyu S; He, Xiao; Truhlar, Donald G

    2016-03-08

    Kohn-Sham density functional theory is widely used for applications of electronic structure theory in chemistry, materials science, and condensed-matter physics, but the accuracy depends on the quality of the exchange-correlation functional. Here, we present a new local exchange-correlation functional called MN15-L that predicts accurate results for a broad range of molecular and solid-state properties including main-group bond energies, transition metal bond energies, reaction barrier heights, noncovalent interactions, atomic excitation energies, ionization potentials, electron affinities, total atomic energies, hydrocarbon thermochemistry, and lattice constants of solids. The MN15-L functional has the same mathematical form as a previous meta-nonseparable gradient approximation exchange-correlation functional, MN12-L, but it is improved because we optimized it against a larger database, designated 2015A, and included smoothness restraints; the optimization has a much better representation of transition metals. The mean unsigned error on 422 chemical energies is 2.32 kcal/mol, which is the best among all tested functionals, with or without nonlocal exchange. The MN15-L functional also provides good results for test sets that are outside the training set. A key issue is that the functional is local (no nonlocal exchange or nonlocal correlation), which makes it relatively economical for treating large and complex systems and solids. Another key advantage is that medium-range correlation energy is built in so that one does not need to add damped dispersion by molecular mechanics in order to predict accurate noncovalent binding energies. We believe that the MN15-L functional should be useful for a wide variety of applications in chemistry, physics, materials science, and molecular biology.

  17. On the elimination of the electronic structure bottleneck in on the fly nonadiabatic dynamics for small to moderate sized (10-15 atom) molecules using fit diabatic representations based solely on ab initio electronic structure data: The photodissociation of phenol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiaolei, E-mail: virtualzx@gmail.com; Yarkony, David R., E-mail: yarkony@jhu.edu

    2016-01-14

    In this work, we demonstrate that for moderate sized systems, here a system with 13 atoms, global coupled potential energy surfaces defined for several electronic states over a wide energy range and for distinct regions of nuclear coordinate space characterized by distinct electron configurations, can be constructed with precise energetics and an excellent description of non-adiabatic interactions in all regions. This is accomplished using a recently reported algorithm for constructing quasi-diabatic representations, H^d, of adiabatic electronic states coupled by conical intersections. In this work, the algorithm is used to construct an H^d to describe the photodissociation of phenol from its first and second excited electronic states. The representation treats all 33 internal degrees of freedom in an even-handed manner. The ab initio adiabatic electronic structure data used to construct the fit are obtained exclusively from multireference configuration interaction with single and double excitation wave functions comprised of 88 × 10^6 configuration state functions, at geometries determined by quasi-classical trajectories. Since the algorithm uses energy gradients and derivative couplings in addition to electronic energies to construct H^d, data at only 7379 nuclear configurations are required to construct a representation, which describes all nuclear configurations involved in H atom photodissociation to produce the phenoxyl radical in its ground or first excited electronic state, with a mean unsigned energy error of 202.9 cm^-1 for electronic energies below 60,000 cm^-1.

  18. VR-SCOSMO: A smooth conductor-like screening model with charge-dependent radii for modeling chemical reactions.

    PubMed

    Kuechler, Erich R; Giese, Timothy J; York, Darrin M

    2016-04-28

    To better represent the solvation effects observed along reaction pathways, and of ionic species in general, a charge-dependent variable-radii smooth conductor-like screening model (VR-SCOSMO) is developed. This model is implemented and parameterized with a third order density-functional tight binding quantum model, DFTB3/3OB-OPhyd, a quantum method which was developed for organic and biological compounds, utilizing a specific parameterization for phosphate hydrolysis reactions. Unlike most other applications with the DFTB3/3OB model, an auxiliary set of atomic multipoles is constructed from the underlying DFTB3 density matrix, which is used to interact the solute with the solvent response surface. The resulting method is variational, produces smooth energies, and has analytic gradients. As a baseline, a conventional SCOSMO model with fixed radii is also parameterized. The SCOSMO and VR-SCOSMO models are shown to have comparable accuracy in reproducing neutral-molecule absolute solvation free energies; however, the VR-SCOSMO model is shown to reduce the mean unsigned errors (MUEs) of ionic compounds by half (about 2-3 kcal/mol). The VR-SCOSMO model presents accuracy similar to a charge-dependent Poisson-Boltzmann model introduced by Hou et al. [J. Chem. Theory Comput. 6, 2303 (2010)]. VR-SCOSMO is then used to examine the hydrolysis of trimethylphosphate and seven other phosphoryl transesterification reactions with different leaving groups. Two-dimensional energy landscapes are constructed for these reactions and calculated barriers are compared to those obtained from ab initio polarizable continuum calculations and experiment. Results of the VR-SCOSMO model are in good agreement in both cases, capturing the rate-limiting reaction barrier and the nature of the transition state.

  19. The radiology digital dashboard: effects on report turnaround time.

    PubMed

    Morgan, Matthew B; Branstetter, Barton F; Lionetti, David M; Richardson, Jeremy S; Chang, Paul J

    2008-03-01

    As radiology departments transition to near-complete digital information management, work flows and their supporting informatics infrastructure are becoming increasingly complex. Digital dashboards can integrate separate computerized information systems and summarize key work flow metrics in real time to facilitate informed decision making. A PACS-integrated digital dashboard function designed to alert radiologists to their unsigned report queue status, coupled with an actionable link to the report signing application, resulted in a 24% reduction in the time between transcription and report finalization. The dashboard was well received by radiologists who reported high usage for signing reports. Further research is needed to identify and evaluate other potentially useful work flow metrics for inclusion in a radiology clinical dashboard.

  20. Empirical Behavioral Models to Support Alternative Tools for the Analysis of Mixed-Priority Pedestrian-Vehicle Interaction in a Highway Capacity Context

    PubMed Central

    Rouphail, Nagui M.

    2011-01-01

    This paper presents behavioral-based models for describing pedestrian gap acceptance at unsignalized crosswalks in a mixed-priority environment, where some drivers yield and some pedestrians cross in gaps. Logistic regression models are developed to predict the probability of pedestrian crossings as a function of vehicle dynamics, pedestrian assertiveness, and other factors. In combination with prior work on probabilistic yielding models, the results can be incorporated in a simulation environment, where they can more fully describe the interaction of these two modes. The approach is intended to supplement HCM analytical procedure for locations where significant interaction occurs between drivers and pedestrians, including modern roundabouts. PMID:21643488
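    A minimal sketch of the kind of logistic gap-acceptance model described above, predicting the probability that a pedestrian crosses in a given gap from vehicle dynamics and assertiveness. The features, data, and fitted coefficients are hypothetical, not the paper's calibrated model:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # hypothetical pedestrian-vehicle encounters at an unsignalized crosswalk
    # features: available time gap (s), vehicle speed (km/h), assertive pedestrian (0/1)
    X = np.array([[2.0, 45, 0], [6.5, 30, 1], [3.1, 50, 0], [8.0, 25, 1],
                  [4.2, 40, 0], [7.3, 35, 1], [2.8, 55, 0], [5.9, 30, 0],
                  [9.1, 20, 1], [3.6, 48, 0], [6.8, 33, 1], [4.9, 42, 0]])
    y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # 1 = pedestrian crossed in the gap

    model = LogisticRegression().fit(X, y)
    p_cross = model.predict_proba([[5.0, 40, 1]])[0, 1]  # probability of crossing in a 5 s gap
    print(round(p_cross, 2))
    ```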

  1. Cost effectiveness of the US Geological Survey's stream-gaging program in New York

    USGS Publications Warehouse

    Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.

    1986-01-01

    The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost effective means of obtaining streamflow data. This report describes the stream gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)

  2. Sources of error in estimating truck traffic from automatic vehicle classification data

    DOT National Transportation Integrated Search

    1998-10-01

    Truck annual average daily traffic estimation errors resulting from sample classification counts are computed in this paper under two scenarios. One scenario investigates an improper factoring procedure that may be used by highway agencies. The study...

  3. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions error so that model error is included as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
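    The combination step referred to above (Fisher's and Tippett's tests) is simple to state in code: Fisher's method sums -2 ln p over the independent single-phenomenology tests and refers the sum to a chi-squared distribution, while Tippett's method screens on the smallest p-value. The p-values below are purely illustrative:

    ```python
    import numpy as np
    from scipy import stats

    def fishers_method(pvalues):
        """Combine independent single-phenomenology p-values into one screening p-value."""
        stat = -2.0 * np.sum(np.log(pvalues))
        return stats.chi2.sf(stat, df=2 * len(pvalues))

    def tippetts_method(pvalues):
        """Tippett: combined p-value based on the smallest individual p-value."""
        return 1.0 - (1.0 - min(pvalues)) ** len(pvalues)

    # hypothetical p-values from the Ms:mb and depth screening tests
    print(fishers_method([0.03, 0.20]), tippetts_method([0.03, 0.20]))
    ```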

  4. High order cell-centered scheme totally based on cell average

    NASA Astrophysics Data System (ADS)

    Liu, Ze-Yu; Cai, Qing-Dong

    2018-05-01

    This work clarifies the concept of cell average by pointing out the differences between cell average and cell centroid value, which are averaged cell-centered value and pointwise cell-centered value, respectively. Interpolation based on cell averages is constructed and high order QUICK-like numerical scheme is designed for such interpolation. A new approach of error analysis is introduced in this work, which is similar to Taylor’s expansion.

  5. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.

  6. Intuitive theories of information: beliefs about the value of redundancy.

    PubMed

    Soll, J B

    1999-03-01

    In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.

  7. Is ozone model bias driven by errors in cloud predictions? A quantitative assessment using satellite cloud retrievals in WRF-Chem

    NASA Astrophysics Data System (ADS)

    Ryu, Y. H.; Hodzic, A.; Barré, J.; Descombes, G.; Minnis, P.

    2017-12-01

    Clouds play a key role in radiation and hence O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much of the bias in O3 predictions is caused by inaccurate cloud predictions. This study quantifies the errors in surface O3 predictions associated with clouds in summertime over CONUS using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. Cloud fields used for photochemistry are corrected based on satellite cloud retrievals in sensitivity simulations. It is found that the WRF-Chem model is able to detect about 60% of clouds in the right locations and generally underpredicts cloud optical depths. The errors in hourly O3 due to the errors in cloud predictions can be up to 60 ppb. On average in summertime over CONUS, errors of 1-6 ppb in 8-h average O3 are found to be attributable to errors in cloud predictions under cloudy sky conditions. The contribution of changes in photolysis rates due to clouds is found to be larger (~80% on average) than that of light-dependent BVOC emissions. The effects of cloud corrections on O3 are about 2 times larger in VOC-limited than NOx-limited regimes, suggesting that the benefits of accurate cloud predictions would be greater in VOC-limited than NOx-limited regimes.

  8. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, S; Hong, C; Kim, M

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  9. Comparison of Online 6 Degree-of-Freedom Image Registration of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Intracranial Radiosurgery.

    PubMed

    Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong

    2017-06-01

    The study was aimed to compare online 6 degree-of-freedom image registrations of TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate isocenter location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken when the phantom was located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and residual errors calculated from cone-beam computed tomography and ExacTrac were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in the rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, 0.21 ± 0.18 mm, respectively. The average residual error differences in the rotation, roll, and pitch were 0.40°± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions. ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.

  10. Joint optimization of a partially coherent Gaussian beam for free-space optical communication over turbulent channels with pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali

    2013-02-01

    Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.

  11. Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont

    USGS Publications Warehouse

    Smath, J.A.; Blackey, F.E.

    1986-01-01

    Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)

  12. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on the improvement of strategy reliability by utilizing more training sets and additional geometric attribute constraints.

  13. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

    Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  14. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
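    For grids with a constant refinement ratio, the Richardson extrapolation and grid convergence index mentioned above reduce to a few lines. A sketch with a hypothetical time-averaged quantity computed on three grids (refinement ratio 2, safety factor 1.25):

    ```python
    import numpy as np

    def richardson_gci(f_fine, f_medium, f_coarse, r=2.0, safety=1.25):
        """Observed order of accuracy, Richardson-extrapolated value, and grid
        convergence index from three grids with constant refinement ratio r."""
        p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)   # extrapolated estimate
        gci = safety * abs((f_fine - f_medium) / f_fine) / (r ** p - 1.0)
        return p, f_exact, gci

    # hypothetical time-averaged void fraction on three successively refined grids
    p, f_ex, gci = richardson_gci(0.412, 0.418, 0.431)
    print(round(p, 2), round(f_ex, 4), f"GCI = {100 * gci:.2f}%")
    ```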

  15. Robust Tomography using Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas

    2013-03-01

    Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.

  16. Predictability of Solar Radiation for Photovoltaics systems over Europe: from short-term to seasonal time-scales

    NASA Astrophysics Data System (ADS)

    De Felice, Matteo; Petitta, Marcello; Ruti, Paolo

    2014-05-01

    Photovoltaic capacity is growing steadily in Europe, rising from almost 14 GWp in 2011 to 21.5 GWp in 2012 [1]. Accurate forecasts are needed for planning and operational purposes, with the possibility to model and predict solar variability at different time-scales. This study examines the predictability of daily surface solar radiation by comparing ECMWF operational forecasts with CM-SAF satellite measurements on the Meteosat (MSG) full disk domain. The operational forecasts used are the IFS system up to 10 days and the System4 seasonal forecast up to three months. Forecasts are analysed in terms of the average and variance of errors, showing error maps and averages over specific domains as a function of prediction lead time. In all cases, forecasts are compared with predictions obtained using persistence and state-of-the-art time-series models. We observe a wide range of errors, with the performance of forecasts dramatically affected by orography and season. Errors are lower over southern Italy and Spain, with errors in some areas consistently under 10% up to ten days ahead during summer (JJA). Finally, we conclude the study with some insight on how to "translate" the error in solar radiation into error in solar power production using available production data from solar power plants. [1] EurObserver, "Baromètre Photovoltaïque, Le journal des énergies renouvelables, April 2012."

  17. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration), can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians, can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables and for developing analyses, which require a scalar equivalent representation of refractive power.

  18. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
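    The recharge approximation described above is essentially base flow divided by drainage area, converted to a depth of water per year. A minimal sketch with hypothetical basin values (one cubic foot per second per square mile sustained for a year corresponds to roughly 13.6 inches of water):

    ```python
    def recharge_inches_per_year(base_flow_cfs, drainage_area_mi2):
        """Average annual recharge approximated as base flow per unit drainage area.
        1 cfs per square mile over a year is about 13.6 inches of water depth."""
        return 13.6 * base_flow_cfs / drainage_area_mi2

    # hypothetical gaged basin: 52 mi^2 with an average annual base flow of 28 cfs
    print(round(recharge_inches_per_year(28.0, 52.0), 1))   # ~7.3 inches per year
    ```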

  19. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be used to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average number of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved by basing comma-free code word assignments on conditional probabilities of character occurrence.
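
    As a sketch of how the average number of bits per character for a Huffman code can be estimated from character frequencies (the sample text and helper names are illustrative; the study's 58-character set and narrative files are not reproduced here):

```python
import heapq
from collections import Counter

def huffman_code_lengths(frequencies: dict[str, int]) -> dict[str, int]:
    """Return Huffman code length (bits) per symbol for the given frequency table."""
    # Heap items: (total_frequency, tie_breaker, {symbol: code_length_so_far})
    heap = [(freq, i, {sym: 0}) for i, (sym, freq) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, lengths1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, lengths2 = heapq.heappop(heap)
        merged = {sym: depth + 1 for sym, depth in {**lengths1, **lengths2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

text = "a short narrative file used only to illustrate the calculation"
freqs = Counter(text)
lengths = huffman_code_lengths(freqs)
total_chars = sum(freqs.values())
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / total_chars
print(f"average bits per character: {avg_bits:.2f}")
```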

  20. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  1. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
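
    A minimal sketch of error-variance-weighted merging of two retrievals, with weights of the form 1/sigma^2 as the abstract describes; the data, variances and function name are hypothetical, and the actual triple-collocation parameterization is not reproduced:

```python
import numpy as np

def merge_inverse_variance(retrievals: np.ndarray, error_variances: np.ndarray) -> np.ndarray:
    """Weighted average of retrievals (rows = sensors) with weights 1/sigma_i^2."""
    weights = 1.0 / error_variances            # e.g., variances estimated by triple collocation
    weights = weights / weights.sum()          # normalize weights to sum to one
    return weights @ retrievals

active = np.array([0.21, 0.25, 0.30])          # hypothetical soil moisture series (m^3/m^3)
passive = np.array([0.19, 0.27, 0.28])
merged = merge_inverse_variance(np.vstack([active, passive]),
                                error_variances=np.array([0.0009, 0.0004]))
print(merged)
```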

  2. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.

  3. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.

  4. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  5. [A study of refractive errors in a primary school in Cotonou, Benin].

    PubMed

    Sounouvou, I; Tchabi, S; Doutetien, C; Sonon, F; Yehouessi, L; Bassabi, S K

    2008-10-01

    Determine the epidemiologic aspects and the degree of severity of different refractive errors in primary schoolchildren. A prospective and descriptive study was conducted from 1 December 2005 to 31 March 2006 on schoolchildren ranging from 4 to 16 years of age in a public primary school in Cotonou, Benin. The refraction was evaluated for any visual acuity lower than or equal to 0.7. The study included 1057 schoolchildren. The average age of the study population was 8.5+/-2.6 years, with a slight predominance of females (51.8%). The prevalence of refractive error was 10.6%, and astigmatism accounted for the most frequent refractive anomaly (91.9%). Myopia and hyperopia were associated with astigmatism in 29.4% and 16.1% of the cases, respectively. The age bracket from 6 to 11 years accounted for the majority of refractive errors (75.9%), without age or sex being risk factors (p=0.811 and p=0.321, respectively). The average vision of the ametropic eye was 0.61, with a clear predominance of slight refractive errors (89.3%) and particularly of low-level simple astigmatism (45.5%). The relatively low prevalence of refractive error observed does not obviate the need for implementing actions to improve the ocular health of schoolchildren.

  6. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  7. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions; such weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms for redistributing heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to the conditions during an El Nino episode. Similar wind direction errors cause significant changes in sea-surface temperature and sea-level patterns in coastal oceans in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere. When directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.
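
    For reference, the Ekman and Sverdrup volume transports mentioned above are conventionally computed from the wind stress using the standard relations below (textbook definitions, not formulas taken from this record), where τ is the wind stress, ρ the water density, f the Coriolis parameter and β its meridional gradient:

    $$ \mathbf{M}_{\mathrm{Ek}} = \frac{1}{\rho f}\,\boldsymbol{\tau}\times\hat{\mathbf{z}}, \qquad V_{\mathrm{Sv}} = \frac{1}{\rho\beta}\,\hat{\mathbf{z}}\cdot\left(\nabla\times\boldsymbol{\tau}\right) $$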

  8. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus-dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task-dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", using a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.

  9. Measuring Scale Errors in a Laser Tracker’s Horizontal Angle Encoder Through Simple Length Measurement and Two-Face System Tests

    PubMed Central

    Muralikrishnan, B.; Blackburn, C.; Sawyer, D.; Phillips, S.; Bridges, R.

    2010-01-01

    We describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker in this paper. The method does not require expensive instrumentation such as a rotary stage or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low order harmonic scale errors can be estimated from this data and may then be used to correct the encoder’s error map to improve the tracker’s angle measurement accuracy. We have demonstrated this for the second order harmonic in this paper. It is important to compensate for even order harmonics as their influence cannot be removed by averaging front face and back face measurements whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers. Two of those trackers are newer models introduced at the time of writing of this paper. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from a tracker were of the order of ± 65 μm before correcting the error map. They reduced to less than ± 25 μm after correcting the error map for second order scale errors. Newer trackers from the same manufacturers did not show this error. An older tracker from a third manufacturer also did not show this error. PMID:27134789
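
    A sketch of how low-order harmonic scale errors could be estimated by least-squares fitting sinusoids to length errors observed at different azimuthal positions; the data below are synthetic, and this is not the trackers' actual error-map format:

```python
import numpy as np

azimuth = np.deg2rad(np.arange(0, 360, 20))                    # measurement positions
rng = np.random.default_rng(1)
# Synthetic length errors (mm) dominated by a second-order harmonic plus noise.
length_error = (0.05 * np.cos(2 * azimuth) + 0.02 * np.sin(2 * azimuth)
                + rng.normal(0, 0.005, azimuth.size))

# Least-squares fit of an offset plus first- and second-order harmonics.
design = np.column_stack([np.ones_like(azimuth),
                          np.cos(azimuth), np.sin(azimuth),
                          np.cos(2 * azimuth), np.sin(2 * azimuth)])
coeffs, *_ = np.linalg.lstsq(design, length_error, rcond=None)
print("second-order harmonic amplitude (mm):", np.hypot(coeffs[3], coeffs[4]).round(3))
```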

  10. A retrospective review of medical errors adjudicated in court between 2002 and 2012 in Spain.

    PubMed

    Giraldo, Priscila; Sato, Luke; Sala, María; Comas, Merce; Dywer, Kathy; Castells, Xavier

    2016-02-01

    This paper describes verdicts in court involving injury-producing medical errors in Spain. A descriptive analysis of 1041 closed court verdicts from Spain between January 2002 and December 2012 was performed. It was determined whether a medical error had occurred, and among those with a medical error (n = 270), the characteristics and results of litigation were analyzed. Data on litigation were obtained from the Thomson Reuters Aranzadi Westlaw databases. All verdicts involving the health system were reviewed and classified according to the presence of medical error. Among those, contributory factors, the medical specialty involved, health impact (death, disability and severity) and results of litigation (resolution, time to verdict and economic compensation) were described. Medical errors were involved in 25.9% of court verdicts. The cause of the medical error was a diagnosis-related problem in 25.1% and surgical treatment in 22.2%, and Obstetrics-Gynecology was the most frequently involved specialty (21%). Most errors were of high severity (59.4%), and one-third (32%) caused death. The average time interval between the occurrence of the error and the verdict was 7.8 years. The average indemnity payment was €239 505.24; the highest was in Psychiatry (€7 585 075.86) and the lowest in Emergency Medicine (€69 871.19). This study indicates that in Spain medical errors are common among verdicts involving the health system, most of them causing high-severity adverse outcomes. The interval between the medical error and the verdict is excessive, and there is a wide range of economic compensation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  11. SU-E-T-192: FMEA Severity Scores - Do We Really Know?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonigan, J; Johnson, J; Kry, S

    2014-06-01

    Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., the severity of the FM). Resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs have resulted in significant PTV dose errors of up to 4.3%, as well as a significant distance-to-agreement error of close to 1 mm between PTV and OAR. Of the 129 survey responses, the vast majority of responders used Varian machines with Pinnacle and Eclipse planning systems. The average experience was 17 years, yet familiarity with FMEA was less than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely, in some cases with a 50% difference in the expected dose delivery error amongst respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range 0-50%), compared to the 2% error seen in measurement. Analysis of rankings in the survey, treatment planning studies, and quantitative value comparisons will be presented. Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and verify the accuracy of FMEA results compared to highly variable subjective scores.

  12. Toward developing a standardized Arabic continuous text reading chart.

    PubMed

    Alabdulkader, Balsam; Leat, Susan Jennifer

    Near visual acuity is an essential measurement during an oculo-visual assessment. Short-duration continuous text reading charts measure reading acuity and other aspects of reading performance. There is no standardized version of such a chart in Arabic. The aim of this study is to create sentences of equal readability for use in the development of a standardized Arabic continuous text reading chart. Initially, 109 Arabic pairs of sentences were created for use in constructing a chart with a similar layout to the Colenbrander chart. They were created to have the same grade level of difficulty and physical length. Fifty-three adults and sixteen children were recruited to validate the sentences. Reading speed in correct words per minute (CWPM) and standard length words per minute (SLWPM) was measured and errors were counted. Criteria based on reading speed and errors made in each sentence pair were used to exclude sentence pairs with more outlying characteristics, and to select the final group of sentence pairs. Forty-five sentence pairs were selected according to the elimination criteria. For adults, the average reading speed for the final sentences was 166 CWPM and 187 SLWPM, and the average number of errors per sentence pair was 0.21. Children's average reading speed for the final group of sentences was 61 CWPM and 72 SLWPM, and their average error rate was 1.71. The reliability analysis showed that the final 45 sentence pairs are highly comparable. They will be used in constructing an Arabic short-duration continuous text reading chart. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  13. Astrometric observations of visual binaries using 26-inch refractor during 2007-2014 at Pulkovo

    NASA Astrophysics Data System (ADS)

    Izmailov, I. S.; Roshchina, E. A.

    2016-04-01

    We present the results of 15184 astrometric observations of 322 visual binaries carried out in 2007-2014 at Pulkovo observatory. In 2007, the 26-inch refractor ( F = 10413 mm, D = 65 cm) was equipped with the CCD camera FLI ProLine 09000 (FOV 12' × 12', 3056 × 3056 pixels, 0.238 arcsec pixel-1). Telescope automation and the installation of a weather monitoring system allowed us to increase the number of observations significantly. Visual binary and multiple systems with an angular distance in the interval 1."1-78."6, 7."3 on average, were included in the observing program. The results were studied in detail for systematic errors using calibration star pairs; no dependence of errors on temperature, pressure, or hour angle was detected. The dependence of the 26-inch refractor's scale on temperature was taken into account in the calculations. The accuracy of measurement of a single CCD image is in the range of 0."0005 to 0."289, 0."021 on average along both coordinates. Mean errors in annual average values of angular distance and position angle are equal to 0."005 and 0.°04, respectively. The results are available at http://izmccd.puldb.ru/vds.htm and in the Strasbourg Astronomical Data Center (CDS). In the catalog, the separations and position angles per night of observation and as annual averages, as well as errors for all the values and standard deviations of a single observation, are presented. We also present a comparison of 50 pairs of stars with known orbital solutions against their ephemerides.

  14. Measurement of vertebral rotation: Perdriolle versus Raimondi.

    PubMed

    Weiss, H R

    1995-01-01

    The measurement of vertebral rotation according to Perdriolle is widely used in the French-speaking and Anglo-American countries. Even with this measurement technique there may be a relatively high estimation error because of its relatively coarse grading in steps of 5 degrees. The measurement according to Raimondi seems to be easier to use and is more accurate, with 2-degree steps. The purpose of our study was to determine the technical error of both measuring methods. The apex vertebrae of 40 curves on 20 anteroposterior (AP) radiographs were measured using the Perdriolle torsion meter and the Regolo Raimondi. Interrater and intrarater reliability were computed. The thoracic Cobb angle was 43 degrees, the lumbar Cobb angle 36 degrees. The average rotation according to Perdriolle was 19.1 degrees thoracic (SD 11.14) and 12.7 degrees lumbar (11.21). Measurement of vertebral rotation according to Raimondi showed an average rotation of 20.25 degrees in the thoracic region (11.40) and 13.4 degrees lumbar (10.92). The intrarater reliability was r = 0.991 (Perdriolle) and r = 0.997 (Raimondi). The average intrarater error was 1.025 degrees for the Perdriolle measurement and 0.4 degrees for the Raimondi measurement. The interrater error was on average 3.112 degrees for the Perdriolle measurement and 3.630 degrees for the Raimondi measurement. This shows that both methods are useful tools for the follow-up of vertebral rotation as projected on standard X-rays for the experienced clinician. The Raimondi ruler is easier to use and is slightly more reliable.

  15. Forces associated with pneumatic power screwdriver operation: statics and dynamics.

    PubMed

    Lin, Jia-Hua; Radwin, Robert G; Fronczak, Frank J; Richard, Terry G

    2003-10-10

    The statics and dynamics of pneumatic power screwdriver operation were investigated in the context of predicting forces acting against the human operator. A static force model is described in the paper, based on tool geometry, mass, orientation in space, feed force, torque build-up, and stall torque. Three common power hand tool shapes are considered, including pistol grip, right angle, and in-line. The static model estimates the handle force needed to support a power nutrunner when it acts against the tightened fastener with a constant torque. A system of equations for the static force and moment equilibrium conditions is established, and the resultant handle force (resolved in orthogonal directions) is calculated in matrix form. A dynamic model is formulated to describe pneumatic motor torque build-up characteristics dependent on threaded fastener joint hardness. Six pneumatic tools were tested to validate the deterministic model. The average torque prediction error was 6.6% (SD = 5.4%) and the average handle force prediction error was 6.7% (SD = 6.4%) for a medium-soft threaded fastener joint. The average torque prediction error was 5.2% (SD = 5.3%) and the average handle force prediction error was 3.6% (SD = 3.2%) for a hard threaded fastener joint. Use of these equations for estimating handle forces based on passive mechanical elements representing the human operator is also described. Together, these models should be useful for considering tool handle force in the selection and design of power screwdrivers, particularly for minimizing handle forces in the prevention of injuries and work-related musculoskeletal disorders.
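
    A minimal sketch of the kind of planar static force and moment balance described above, solved for the handle reaction; the geometry, loads and variable names are hypothetical, and this is not the paper's full three-dimensional model:

```python
import numpy as np

# Minimal planar static balance for a pistol-grip tool (hypothetical geometry and loads):
# solve for the handle reaction force (Rx, Rz) and reaction moment M about the grip.
mass, g = 1.8, 9.81             # tool mass (kg), gravity (m/s^2)
feed_force = 50.0               # axial feed-force reaction at the bit (N, -x direction)
stall_torque = 5.0              # fastener reaction torque about the grip (N*m)
r_bit = np.array([0.20, 0.06])  # bit location relative to grip (x, z), m
r_cg = np.array([0.08, 0.03])   # centre of gravity relative to grip (x, z), m

f_bit = np.array([-feed_force, 0.0])    # workpiece reaction acting on the bit
f_weight = np.array([0.0, -mass * g])   # weight acting at the centre of gravity

def moment(r, f):
    """Planar moment (about the out-of-plane axis) of force f applied at position r."""
    return r[0] * f[1] - r[1] * f[0]

# Equilibrium: the handle reaction balances the sum of external forces and moments.
R = -(f_bit + f_weight)
M = -(moment(r_bit, f_bit) + moment(r_cg, f_weight) + stall_torque)
print(f"handle force = ({R[0]:.1f}, {R[1]:.1f}) N, handle moment = {M:.2f} N*m")
```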

  16. Derivation and precision of mean field electrodynamics with mesoscale fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Hongzhe; Blackman, Eric G.

    2018-06-01

    Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.

  17. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
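
    The central idea, stated generically for a linear model (a sketch of the approach, not the authors' WLS-ICE formulas): the fit itself may use ordinary weighted least squares, but the parameter covariance is evaluated with the full data covariance matrix via a sandwich expression.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.1, 10, 40)
X = np.column_stack([np.ones_like(t), t])        # linear model: y = a + b*t (e.g., MSD vs time)

# Correlated noise: trajectory-averaged fluctuations are temporally correlated.
C = 0.05 * np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0)
y = (0.5 + 0.3 * t) + np.linalg.cholesky(C) @ rng.standard_normal(t.size)

W = np.diag(1.0 / np.diag(C))                    # diagonal weights: plain weighted least squares
A = X.T @ W @ X
theta = np.linalg.solve(A, X.T @ W @ y)          # WLS point estimate (ignores correlations)

# Sandwich covariance: error estimate that uses the full covariance matrix C.
middle = X.T @ W @ C @ W @ X
cov_theta = np.linalg.solve(A, middle) @ np.linalg.inv(A)
print("fit:", theta.round(3), "parameter std:", np.sqrt(np.diag(cov_theta)).round(3))
```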

  18. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  19. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.
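
    The "simple gradient diffusion model" referred to above is conventionally written as follows (the generic closure form, with an eddy diffusivity set by a turbulent Schmidt number; this is a standard statement of the model, not the presenters' notation):

    $$ \overline{u_i' c'} = -\,D_t\,\frac{\partial \bar{c}}{\partial x_i}, \qquad D_t = \frac{\nu_t}{\mathrm{Sc}_t} $$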

  20. An Adaptive 6-DOF Tracking Method by Hybrid Sensing for Ultrasonic Endoscopes

    PubMed Central

    Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin

    2014-01-01

    In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is also developed. We implement 6-DOF localization by sensor integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer, and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of endoscopes, and three symmetric cylindrical coils are placed around patients' abdomens. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to obtain an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. With a reasonable setup, the average position error is under 0.3 cm, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be controlled to less than 3.5°. PMID:24915179

  1. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on the specification of timing accuracy and on an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is the presence of a residual, or irreducible, error probability, due entirely to the timing system, that cannot be overcome by the data channel.
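
    A sketch of how an average error probability can be computed by weighting a conditional bit-error model by a timing-error distribution; the conditional model and jitter level below are illustrative stand-ins, not the paper's analysis:

```python
import numpy as np

# Average bit error probability obtained by weighting the conditional error
# probability P(error | timing offset) by a timing-error distribution.
def conditional_error(offset, slot_width=1.0, floor=1e-6):
    """Illustrative conditional error model: grows as the pulse slides out of its slot."""
    overlap = np.clip(np.abs(offset) / slot_width, 0.0, 1.0)
    return floor + 0.5 * overlap

sigma = 0.05                                        # rms timing jitter, in slot widths
offsets = np.linspace(-4 * sigma, 4 * sigma, 2001)
pdf = np.exp(-0.5 * (offsets / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
avg_error = np.trapz(conditional_error(offsets) * pdf, offsets)
print(f"average error probability: {avg_error:.3e}")
```

    The non-zero floor in the conditional model is what produces the residual, timing-driven error probability noted in the abstract.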

  2. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  3. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore the inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in a closed-form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
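
    A minimal sketch of the standard inverse probability weighting estimator of the average treatment effect discussed above, applied to simulated data with additive outcome measurement error (function and variable names are illustrative; this is not the authors' correction method):

```python
import numpy as np

def ipw_ate(y, treatment, propensity):
    """Inverse probability weighting estimate of the average treatment effect."""
    y, t, e = map(np.asarray, (y, treatment, propensity))
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))                          # true propensity score
t = rng.binomial(1, e)
y_true = 2.0 * t + x + rng.normal(size=n)             # true treatment effect = 2
y_observed = y_true + rng.normal(scale=0.5, size=n)   # additive, nondifferential outcome error
print(ipw_ate(y_true, t, e), ipw_ate(y_observed, t, e))  # both close to 2 on average
```

    The comparison illustrates the abstract's point that, for a continuous outcome with additive measurement error, the naive IPW estimate can remain consistent.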

  4. Reliability and Validity Assessment of a Linear Position Transducer

    PubMed Central

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

    The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and the T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly for AV and AP) make this device a useful tool for monitoring resistance training. Key points: This study determined the validity and reliability of peak velocity, average velocity, peak power and average power measurements made using a linear position transducer. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and power. PMID:25729300

  5. The effects of extinction-aroused attention on context conditioning.

    PubMed

    Nelson, James Byron; Fabiano, Andrew M; Lamoureux, Jeffrey A

    2018-04-01

    Two experiments assessed the effects of extinguishing a conditioned cue on subsequent context conditioning. Each experiment used a different video-game method where sensors predicted attacking spaceships and participants responded to the sensor in a way that prepared them for the upcoming attack. In Experiment 1 extinction of a cue which signaled a spaceship-attack outcome facilitated subsequent learning when the attack occurred unsignaled. In Experiment 2 extinction of a cue facilitated subsequent learning, regardless of whether the spaceship outcome was the same or different as used in the earlier training. In neither experiment did the extinction context become inhibitory. Results are discussed in terms of current associative theories of attention and conditioning. © 2018 Nelson et al.; Published by Cold Spring Harbor Laboratory Press.

  6. Effect of planar cuts' orientation on the perceived surface layout and object's shape.

    PubMed

    Bocheva, Nadejda

    2009-07-01

    The effect of the orientation of the cutting planes producing planar curves over the surface of an object on its perceived pose and shape was investigated for line drawings representing three-dimensional objects. The results suggest that the orientational flow produced by the surface curves introduces an apparent object rotation in depth and in the image plane and changes in its perceived elongation. The apparent location of the nearest points is determined by the points of maximal view-dependent unsigned curvature of the surface curves. The data are discussed in relation to the interaction of the shape-from-silhouette system and shape-from-contour system and its effect on the interpretation of the surface contours with respect to the surface geometry.

  7. Nonparametric weighted stochastic block models

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2018-01-01

    We present a Bayesian formulation of weighted stochastic block models that can be used to infer the large-scale modular structure of weighted networks, including their hierarchical organization. Our method is nonparametric, and thus does not require the prior knowledge of the number of groups or other dimensions of the model, which are instead inferred from data. We give a comprehensive treatment of different kinds of edge weights (i.e., continuous or discrete, signed or unsigned, bounded or unbounded), as well as arbitrary weight transformations, and describe an unsupervised model selection approach to choose the best network description. We illustrate the application of our method to a variety of empirical weighted networks, such as global migrations, voting patterns in congress, and neural connections in the human brain.

  8. Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.

    PubMed

    Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim

    2007-09-01

    The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing career, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.

  9. Error rate of automated calculation for wound surface area using a digital photography.

    PubMed

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its validity has not been fully established. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average value of the wound area taken by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to a large number of photographs, and their average values provide a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Statistics of the radiated field of a space-to-earth microwave power transfer system

    NASA Technical Reports Server (NTRS)

    Stevens, G. H.; Leininger, G.

    1976-01-01

    Statistics such as the average power density pattern, the variance of the power density pattern, and the variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. A limitation on the spectral width of the phase reference for phase control was also established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules do not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. Under these conditions the expected radiation pattern is virtually the same as the error-free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).

  11. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.

  12. A Quantum Theoretical Explanation for Probability Judgment Errors

    ERIC Educational Resources Information Center

    Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.

    2011-01-01

    A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…

  13. Geometrical correction factors for heat flux meters

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1974-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
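
    Consistent with the definition quoted above, the local averaging error for a strip gage of width w centered at x, measuring a one-dimensional flux distribution q(x), can be written as (a sketch of the stated definition, not the report's notation):

    $$ e(x) = \frac{1}{w}\int_{x-w/2}^{\,x+w/2} q(s)\,ds \;-\; q(x) $$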

  14. Effect of gage size on the measurement of local heat flux. [formulas for determining gage averaging errors

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1973-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. In addition, a correction procedure is presented which allows a better estimate for the true value of the local heat flux. As an example of the technique, the formulas are applied to the cases of heat transfer to air slot jets impinging on flat and concave surfaces. It is shown that for many practical problems, the use of very small heat flux gages is often unnecessary.

  15. Multi-temporal AirSWOT elevations on the Willamette river: error characterization and algorithm testing

    NASA Astrophysics Data System (ADS)

    Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.

    2017-12-01

    We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in Western Oregon, which included six days of AirSWOT flights over a 75 km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacies of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially averaged time series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested in a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90 m).
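
    A sketch of one way to form reach-averaged water surface elevation and slope from along-stream samples (a line fit per reach); the reach length, data and function name are illustrative and not the campaign's actual processing:

```python
import numpy as np

def reach_average(distance_m, wse_m, reach_length_m=10_000.0):
    """Reach-averaged water surface elevation and slope from along-stream samples.

    Slope is taken from a least-squares line fit within each reach
    (illustrative processing, not the AirSWOT campaign's actual algorithm).
    """
    distance_m, wse_m = np.asarray(distance_m), np.asarray(wse_m)
    edges = np.arange(distance_m.min(), distance_m.max() + reach_length_m, reach_length_m)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (distance_m >= lo) & (distance_m < hi)
        if mask.sum() < 2:
            continue
        slope, _ = np.polyfit(distance_m[mask], wse_m[mask], 1)
        results.append((wse_m[mask].mean(), -slope))   # downstream slope reported as positive
    return results

# Synthetic 75 km profile falling about 20 cm per km, with noise.
x = np.linspace(0, 75_000, 3000)
wse = 60.0 - 2.0e-4 * x + np.random.default_rng(4).normal(0, 0.1, x.size)
for mean_wse, slope in reach_average(x, wse)[:3]:
    print(f"mean WSE = {mean_wse:.2f} m, slope = {slope * 1e5:.1f} cm/km")
```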

  16. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  17. Measurements of aperture averaging on bit-error-rate

    NASA Astrophysics Data System (ADS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-08-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  18. The Use of a “Hybrid” Trainer in an Established Laparoscopic Skills Program

    PubMed Central

    Colsant, Brian J.; Lynch, Paul J.; Herman, Björn; Klonsky, Jonathan; Young, Steven M.

    2006-01-01

    Objectives: Tabletop inanimate trainers have proven to be a safe, inexpensive, and convenient platform for developing laparoscopic skills. Historically, programs that utilize these trainers rely on subjective evaluation of errors and time as the only measures of performance. Virtual reality simulators offer more extensive data collection capability, but they are expensive and lack realism. This study reviews a new electronic proctor (EP), and its performance within the Rosser Top Gun Laparoscopic Skills and Suturing Program. This "hybrid" training device seeks to capture the strengths of both platforms by providing an affordable, reliable, realistic training arena with metrics to objectively evaluate performance. Methods: An electronic proctor was designed for use in conjunction with drills from the Top Gun Program. The tabletop trainers used were outfitted with an automated electromechanically monitored task arena. Subjects performed 10 repetitions of each of 3 drills: "Cup Drop," "Triangle Transfer," and "Intracorporeal Suturing." In real time, this device evaluates for instrument targeting accuracy, economy of motion, and adherence to the rules of the exercises. A buzzer and flashing light serve to alert the student to inaccuracies and breaches of the defined skill transference parameters. Results: Between July 2001 and June 2003, 117 subjects participated in courses. Seventy-three who met data evaluation criteria were assessed and compared with 744 surgeons who had previously taken the course. The total time to complete each task was significantly longer with the EP in place. The Cup Drop drill with the EP had a mean total time of 1661 seconds (average, 166.10) with 54.49 errors (average, 5.45) vs. 1252 seconds (average, 125.2) without the EP (P=0.000, t=6.735, df=814). The Triangle Transfer drill mean total time was 556 seconds (average, 55.63) and 167.57 errors (average, 16.75) (EP) vs. 454 seconds (non-EP) (average, 45.4) (P=0.000, t=4.447, df=814). The mean total time of the suturing task was 1777 seconds (average, 177.73) and 90.46 errors (average, 9.04) (EP) vs. 1682 seconds (non-EP) (average, 168.2) (P=0.040, t=1.150, df=814). When compared with surgeons who had participated in the Top Gun course prior to the EP, the participants in the study collectively scored in the 18.3rd percentile with the Cup Drop drill, 22.6th percentile with the Triangle Transfer drill, and 36.7th percentile with the Intracorporeal Suturing exercise. When penalizing for errors recorded by the EP, participants scored collectively in the 9.9th, 0.1st, and 17.7th percentile, respectively. No equipment failures occurred, and the agenda of the course did not have to be modified to accommodate the new platform. Conclusions: The EP utilized during the Top Gun Course was introduced without modification of the core curriculum and experienced no device failures. This hybrid trainer offers a cost-effective inanimate simulator that brings quality performance monitoring to traditional inanimate trainers. It appears that the EP influenced student performance by alerting them to errors made, thus causing an increased awareness of and focus on precision and accuracy. This suggests that the EP could have internal guidance capabilities. However, validation studies must be done in the future. PMID:16709348

  19. Toward attenuating the impact of arm positions on electromyography pattern-recognition based motion classification in transradial amputees

    PubMed Central

    2012-01-01

    Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have been studied commonly in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation, which causes the EMG pattern to change when identical motions are performed in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms while performing six classes of arm and hand movements in each of the five arm positions considered in the study. The effect of the arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% from amputated arms, respectively, about 1.0% and 10% lower than those from intact arms. While ACC-MMG signals could yield a similar intra-position classification error (9.9%) to EMG, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were involved in the training set, the average classification error reached a value of around 10.8% for amputated arms. Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of EMG pattern-recognition based methods in classifying movements strongly depends on arm position. This dependency is slightly stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects rather than able-bodied subjects. The two-stage cascade classifier mode, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049

  20. Landsat-8 TIRS thermal radiometric calibration status

    USGS Publications Warehouse

    Barsi, Julia A.; Markham, Brian L.; Montanaro, Matthew; Gerace, Aaron; Hook, Simon; Schott, John R.; Raqueno, Nina G.; Morfitt, Ron

    2017-01-01

    The Thermal Infrared Sensor (TIRS) instrument is the thermal-band imager on the Landsat-8 platform. The initial on-orbit calibration estimates of the two TIRS spectral bands indicated large average radiometric calibration errors, -0.29 and -0.51 W/m2 sr μm or -2.1K and -4.4K at 300K in Bands 10 and 11, respectively, as well as high variability in the errors, 0.87K and 1.67K (1-σ), respectively. The average error was corrected in operational processing in January 2014; this adjustment, however, did not improve the variability. The source of the variability was determined to be stray light from far outside the field of view of the telescope. An algorithm for modeling the stray light effect was developed and implemented in the Landsat-8 processing system in February 2017. The new process has improved the overall calibration of the two TIRS bands, reducing the residual variability in the calibration from 0.87K to 0.51K at 300K for Band 10 and from 1.67K to 0.84K at 300K for Band 11. There are residual average lifetime bias errors in each band: 0.04 W/m2 sr μm (0.30K) and -0.04 W/m2 sr μm (-0.29K), for Bands 10 and 11, respectively.
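
    The radiance-to-kelvin conversion quoted above follows from inverting the Planck function at 300 K. The sketch below is a minimal illustration of that conversion, assuming approximate band-centre wavelengths of 10.9 μm (Band 10) and 12.0 μm (Band 11); operational Landsat products use band-integrated calibration constants rather than this monochromatic shortcut, so the printed values only roughly reproduce the -2.1 K and -4.4 K figures.

        import math

        # Planck constants for spectral radiance expressed in W / (m^2 sr um)
        C1 = 1.19104e8   # 2*h*c^2, in W um^4 m^-2 sr^-1
        C2 = 1.43877e4   # h*c/k, in um K

        def radiance(wl_um, temp_k):
            """Blackbody spectral radiance at wavelength wl_um (um) and temp_k (K)."""
            return C1 / (wl_um ** 5 * (math.exp(C2 / (wl_um * temp_k)) - 1.0))

        def brightness_temperature(wl_um, rad):
            """Invert the Planck function to get temperature from radiance."""
            return C2 / (wl_um * math.log(C1 / (wl_um ** 5 * rad) + 1.0))

        def radiance_bias_in_kelvin(wl_um, temp_k, delta_l):
            """Brightness-temperature error implied by a radiance bias near temp_k."""
            l0 = radiance(wl_um, temp_k)
            return brightness_temperature(wl_um, l0 + delta_l) - temp_k

        # Approximate (assumed) band-centre wavelengths, not the official values
        for band, wl, bias in ((10, 10.9, -0.29), (11, 12.0, -0.51)):
            print(f"Band {band}: {bias} W/(m2 sr um) bias ~= "
                  f"{radiance_bias_in_kelvin(wl, 300.0, bias):+.1f} K at 300 K")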

  1. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and of the oculoplastic surgeons' estimates were calculated relative to computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect on Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.

  2. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may, however, represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites in the sky were good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  3. Increased error-related thalamic activity during early compared to late cocaine abstinence.

    PubMed

    Li, Chiang-Shan R; Luo, Xi; Sinha, Rajita; Rounsaville, Bruce J; Carroll, Kathleen M; Malison, Robert T; Ding, Yu-Shin; Zhang, Sheng; Ide, Jaime S

    2010-06-01

    Altered cognitive control is implicated in the shaping of cocaine dependence. One of the key component processes of cognitive control is error monitoring. Our previous imaging work highlighted greater activity in distinct cortical and subcortical regions including the dorsal anterior cingulate cortex (dACC), thalamus and insula when participants committed an error during the stop signal task (Li et al., 2008b). Importantly, dACC, thalamic and insular activity has been associated with drug craving. One hypothesis is that the intense interoceptive activity during craving prevents these cerebral structures from adequately registering error and/or monitoring performance. Alternatively, the dACC, thalamus and insula show abnormally heightened responses to performance errors, suggesting that excessive responses to salient stimuli such as drug cues could precipitate craving. The two hypotheses would each predict decreased and increased activity during stop error (SE) as compared to stop success (SS) trials in the SST. Here we showed that cocaine dependent patients (PCD) experienced greater subjective feeling of loss of control and cocaine craving during early (average of day 6) compared to late (average of day 18) abstinence. Furthermore, compared to PCD during late abstinence, PCD scanned during early abstinence showed increased thalamic as well as insular but not dACC responses to errors (SE>SS). These findings support the hypothesis that heightened thalamic reactivity to salient stimuli co-occur with cocaine craving and loss of self control. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Characteristics of Low-latitude Coronal Holes near the Maximum of Solar Cycle 24

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmeister, Stefan J.; Veronig, Astrid; Reiss, Martin A.

    We investigate the statistics of 288 low-latitude coronal holes extracted from SDO/AIA-193 filtergrams over the time range 2011 January 01–2013 December 31. We analyze the distribution of characteristic coronal hole properties, such as the areas, mean AIA-193 intensities, and mean magnetic field densities, the local distribution of the SDO/AIA-193 intensity and the magnetic field within the coronal holes, and the distribution of magnetic flux tubes in coronal holes. We find that the mean magnetic field density of all coronal holes under study is 3.0 ± 1.6 G, and the percentage of unbalanced magnetic flux is 49 ± 16%. The mean magnetic field density, the mean unsigned magnetic field density, and the percentage of unbalanced magnetic flux of coronal holes depend strongly pairwise on each other, with correlation coefficients cc > 0.92. Furthermore, we find that the unbalanced magnetic flux of the coronal holes is predominantly concentrated in magnetic flux tubes: 38% (81%) of the unbalanced magnetic flux of coronal holes arises from only 1% (10%) of the coronal hole area, clustered in magnetic flux tubes with field strengths >50 G (10 G). The average magnetic field density and the unbalanced magnetic flux derived from the magnetic flux tubes correlate with the mean magnetic field density and the unbalanced magnetic flux of the overall coronal hole (cc > 0.93). These findings give evidence that the overall magnetic characteristics of coronal holes are governed by the characteristics of the magnetic flux tubes.

  5. Email notification combined with off site signing substantially reduces resident approval to faculty verification time.

    PubMed

    Deitte, Lori A; Moser, Patricia P; Geller, Brian S; Sistrom, Chris L

    2011-06-01

    Attending radiologist signature time (AST) is a variable and modifiable component of overall report turnaround time. Delays in finalized reports have potential to undermine radiologists' value as consultants and adversely affect patient care. This study was performed to evaluate the impact of notebook computer distribution and daily automated e-mail notification on reducing AST. Two simultaneous interventions were initiated in the authors' radiology department in February 2010. These included the distribution of a notebook computer with preloaded software for each attending radiologist to sign radiology reports and daily automated e-mail notifications for unsigned reports. The digital dictation system archive and the radiology information system were queried for all radiology reports produced from January 2009 through August 2010. The time between resident approval and attending radiologist signature before and after the intervention was analyzed. Potential unintended "side effects" of the intervention were also studied. Resident-authored reports were signed, on average, 2.53 hours sooner after the intervention. This represented a highly significant (P = .003) decrease in AST with all else held equal. Postintervention reports were authored by residents at the same rate (about 70%). An unintended "side effect" was that attending radiologists were less likely to make changes to resident-authored reports after the intervention. E-mail notification combined with offsite signing can reduce AST substantially. Notebook computers with preloaded software streamline the process of accessing, editing, and signing reports. The observed decrease in AST reflects a positive change in the timeliness of report signature. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.

  6. SU-E-T-646: Quality Assurance of Truebeam Multi-Leaf Collimator Using a MLC QA Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Lu, J; Hong, D

    2015-06-15

    Purpose: To perform a routine quality assurance procedure for the Truebeam multi-leaf collimator (MLC) using the MLC QA phantom and to verify the stability and reliability of the MLC during treatment. Methods: The MLC QA phantom is a specialized phantom for MLC quality assurance (QA) that contains five radio-opaque spheres embedded in an “L” shape. The phantom was placed isocentrically on the Truebeam treatment couch for the tests. A quality assurance plan was set up in Eclipse v10.0 defining the fields that need to be delivered in order to acquire the necessary images, from which the MLC shapes can then be obtained. The images were acquired by the electronic portal imaging device (EPID) and imported into the PIPSpro software for analysis. The tests were delivered over twelve weeks (once a week) to verify consistency of the delivery, and the images were acquired in the same manner each time. Results: For the leaf position test, the average position error was 0.23mm±0.02mm (range: 0.18mm∼0.25mm). The leaf width was measured at the isocenter; the average error was 0.06mm±0.02mm (range: 0.02mm∼0.08mm) for the leaf width test. The Multi-Port test showed the dynamic leaf shift error; the average error was 0.28mm±0.03mm (range: 0.2mm∼0.35mm). For the leaf transmission test, the average inter-leaf leakage value was 1.0%±0.17% (range: 0.8%∼1.3%) and the average inter-bank leakage value was 32.6%±2.1% (range: 30.2%∼36.1%). Conclusion: Over the 12 weeks of testing, the MLC system of the Truebeam ran in good condition, and MLC delivery can be carried out steadily and reliably during treatment. The MLC QA phantom is a useful test tool for MLC QA.

  7. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and a classification according to type and to individual and system contributory factors were made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found, the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found, the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope" and to making errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chengqiang, L; Yin, Y; Chen, L

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2mm to 2mm), shift errors (2mm, 1mm and 0.5mm) and systematic extension/contraction errors (±2mm, ±1mm and ±0.5mm) of the MLC leaf position were introduced respectively into the original plans to create the simulated plans. Dosimetry factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and systematic shift errors of MLC position was insignificant within 2mm; the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92±0.51%, 1.00±0.24% and 0.62±0.17%, the maximum changes in the D0.1cc of the spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%, and the maximum changes in the Dmean of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was significant. For 2mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31±0.67%, 4.29±0.65% and 4.79±0.82%, the average D0.1cc to the spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%, and the average mean dose to the left and right parotids increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors of up to 2mm, but dose distributions were highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in target volumes and anatomical structures during the course of treatment, and individualized radiotherapy is recommended to ensure adaptive doses.

  9. Quantifying the impact of daily and seasonal variation in sap pH on xylem dissolved inorganic carbon estimates in plum trees.

    PubMed

    Erda, F G; Bloemen, J; Steppe, K

    2014-01-01

    In studies on internal CO2 transport, average xylem sap pH (pH(x)) is one of the factors used for calculation of the concentration of dissolved inorganic carbon in the xylem sap ([CO2 *]). Lack of detailed pH(x) measurements at high temporal resolution could be a potential source of error when evaluating [CO2*] dynamics. In this experiment, we performed continuous measurements of CO2 concentration ([CO2]) and stem temperature (T(stem)), complemented with pH(x) measurements at 30-min intervals during the day at various stages of the growing season (Day of the Year (DOY): 86 (late winter), 128 (mid-spring) and 155 (early summer)) on a plum tree (Prunus domestica L. cv. Reine Claude d'Oullins). We used the recorded pH(x) to calculate [CO2*] based on T(stem) and the corresponding measured [CO2]. No statistically significant difference was found between mean [CO2*] calculated with instantaneous pH(x) and daily average pH(x). However, using an average pH(x) value from a different part of the growing season than the measurements of [CO2] and T(stem) to estimate [CO2*] led to a statistically significant error. The error varied between 3.25 ± 0.01% under-estimation and 3.97 ± 0.01% over-estimation, relative to the true [CO2*] data. Measured pH(x) did not show a significant daily variation, unlike [CO2], which increased during the day and declined at night. As the growing season progressed, daily average [CO2] (3.4%, 5.3%, 7.4%) increased and average pH(x) (5.43, 5.29, 5.20) decreased. Increase in [CO2] will increase its solubility in xylem sap according to Henry's law, and the dissociation of [CO2*] will negatively affect pH(x). Our results are the first quantifying the error in [CO2*] due to the interaction between [CO2] and pH(x) on a seasonal time scale. We found significant changes in pH(x) across the growing season, but overall the effect on the calculation of [CO2*] remained within an error range of 4%. However, it is possible that the error could be more substantial for other tree species, particularly if pH(x) is in the more sensitive range (pH(x) > 6.5). © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.
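
    The calculation described above combines Henry's law solubility with the carbonate equilibria, both of which depend on stem temperature and sap pH. The following is a minimal sketch of that kind of computation, assuming generic literature values for the Henry's law constant and the first dissociation constant of carbonic acid; the study's exact constants and temperature corrections are not reproduced here, and the carbonate term is ignored because it is negligible at pH(x) < 6.5.

        import math

        def co2_star(co2_percent, temp_c, ph):
            """Approximate dissolved inorganic carbon [CO2*] (mol/L) in xylem sap
            from the gaseous CO2 fraction (%), stem temperature (deg C) and sap pH.
            Constants are generic literature values, used here only for illustration."""
            temp_k = temp_c + 273.15
            # Henry's law constant for CO2 in water, mol L^-1 atm^-1 (van 't Hoff form)
            kh = 0.034 * math.exp(2400.0 * (1.0 / temp_k - 1.0 / 298.15))
            k1 = 4.45e-7          # first dissociation constant of carbonic acid (~25 C)
            co2_aq = kh * co2_percent / 100.0      # Henry's law: dissolved CO2
            hco3 = k1 * co2_aq / 10.0 ** (-ph)     # bicarbonate from the K1 equilibrium
            return co2_aq + hco3                   # CO3^2- neglected at pH < 6.5

        # Illustrative values loosely based on the early-summer numbers above
        print(f"[CO2*] ~ {co2_star(7.4, 20.0, 5.20):.4f} mol/L")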

  10. Feasibility of predicting tumor motion using online data acquired during treatment and a generalized neural network optimized with offline patient tumor trajectories.

    PubMed

    Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen

    2018-02-01

    The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period, so that prediction and adaptation can commence soon after treatment begins, and that requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with the use of real-time online tumor positions (obtained during treatment delivery), was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set to be equivalent to the first breathing cycle detected from each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of seven trajectories used for the training of the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in the training of the model were used for leave-one-out cross-validation. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. An average sliding window length of 28 data samples was used. The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained from Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm for all the traces (0.76 ± 0.34 mm for Group 1 and 0.63 ± 0.36 mm for Group 2) is comparable to previously published results. Prediction errors are mainly due to the irregular periodicities between cycles. Since the errors from Groups 1 and 2 are within the same range, this demonstrates that the model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor for tumor trajectory during treatment delivery, eliminating the need for optimizing individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
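
    As a rough illustration of the kind of predictor described (a single hidden layer of 20 neurons fed a sliding window of 35 past positions sampled at 7.5 Hz, predicting about 650 ms, i.e. five samples, ahead), the sketch below uses scikit-learn's MLPRegressor on synthetic breathing-like traces. The training traces, noise levels, and optimizer settings are assumptions for illustration; the authors trained their own backpropagation/batch-gradient-descent implementation on CyberKnife patient trajectories.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        FRAME_DT = 1.0 / 7.5      # EPID-like sampling interval (~0.133 s)
        HORIZON = 5               # ~650 ms ahead = 5 samples at 7.5 Hz
        WINDOW = 35               # sliding window of past positions (~4.6 s)

        def windowed(trace):
            """Build (past-window, future-position) pairs from one tumour trace."""
            n = len(trace) - WINDOW - HORIZON
            x = [trace[i:i + WINDOW] for i in range(n)]
            y = [trace[i + WINDOW + HORIZON - 1] for i in range(n)]
            return np.asarray(x), np.asarray(y)

        # Hypothetical breathing-like training traces (mm), 60 s each
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 60.0, FRAME_DT)
        train = [8.0 * np.sin(2 * np.pi * t / p) + rng.normal(0, 0.3, t.size)
                 for p in (3.5, 4.0, 4.5)]
        x_train = np.vstack([windowed(tr)[0] for tr in train])
        y_train = np.concatenate([windowed(tr)[1] for tr in train])

        # "Generalized" network: one hidden layer of 20 neurons, trained once offline
        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        net.fit(x_train, y_train)

        # Online use on an unseen trace: predict ~650 ms ahead from the last 35 samples
        x_test, y_test = windowed(8.0 * np.sin(2 * np.pi * t / 3.8))
        print("MAE (mm):", np.abs(net.predict(x_test) - y_test).mean())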

  11. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
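
    A minimal sketch of the optimization loop is shown below, using the ask/tell interface of the `cma` package. The objective here is a deliberately simplified stand-in: a candidate vector holds the taps of a forward and an inverse FIR filter, and fitness is the reconstruction MSE of a test signal after uniform quantization of the forward output. The actual study evolved matched transform coefficients benchmarked against the CDF 9/7 wavelet on satellite imagery, none of which is reproduced here.

        import numpy as np
        import cma  # pip install cma

        rng = np.random.default_rng(1)
        signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
        Q_STEP = 0.05        # uniform quantization step, stands in for compression loss
        N_TAPS = 9           # taps per filter; candidate = [forward taps, inverse taps]

        def reconstruction_mse(coeffs):
            """Apply a forward FIR filter, quantize, apply the inverse FIR filter,
            and score the reconstruction error (a simplified stand-in objective)."""
            fwd, inv = coeffs[:N_TAPS], coeffs[N_TAPS:]
            transformed = np.convolve(signal, fwd, mode="same")
            quantized = np.round(transformed / Q_STEP) * Q_STEP
            reconstructed = np.convolve(quantized, inv, mode="same")
            return float(np.mean((reconstructed - signal) ** 2))

        # Start from a near-identity filter pair and let CMA-ES adapt the coefficients
        x0 = np.zeros(2 * N_TAPS)
        x0[N_TAPS // 2] = 1.0
        x0[N_TAPS + N_TAPS // 2] = 1.0
        es = cma.CMAEvolutionStrategy(x0, 0.2, {"maxiter": 200, "verbose": -9})
        while not es.stop():
            candidates = es.ask()
            es.tell(candidates, [reconstruction_mse(c) for c in candidates])

        print("best reconstruction MSE found:", es.result.fbest)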

  12. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

    [Figure residue: bar charts of position estimation error (cm) for a "Color Statistics" method (Angelova…), an error normalized as (Color_Statistics_Error) / Average_Slip_Error, and "Position Estimation Error: Global Pose"; axis ticks not recoverable.] Pose and odometry data (pending some kind of release clearance) were collected at the following sites: Taylor, Gascola, Somerset, Fort Bliss and …

  13. Standardized Protocol for Virtual Surgical Plan and 3-Dimensional Surgical Template-Assisted Single-Stage Mandible Contour Surgery.

    PubMed

    Fu, Xi; Qiao, Jia; Girod, Sabine; Niu, Feng; Liu, Jian Feng; Lee, Gordon K; Gui, Lai

    2017-09-01

    Mandible contour surgery, including reduction gonioplasty and genioplasty, has become increasingly popular in East Asia. However, it is technically challenging and hence leads to a long learning curve and high complication rates, and it often needs secondary revisions. The increasing use of 3-dimensional (3D) technology makes accurate single-stage mandible contour surgery with minimal complication rates possible with a virtual surgical plan (VSP) and 3-D surgical templates. This study aims to establish a standardized protocol for VSP and 3-D surgical template-assisted mandible contour surgery and to evaluate the accuracy of the protocol. In this study, we enrolled 20 patients for mandible contour surgery. Our protocol was to perform VSP based on 3-D computed tomography data; surgical templates were then designed and 3-D printed based on the preoperative VSP. The accuracy of the method was analyzed by 3-D comparison of the VSP and the postoperative result using detailed computer analysis. All patients had symmetric, natural osteotomy lines and satisfactory facial ratios in a single-stage operation. The average relative error between the VSP and the postoperative result on the entire skull was 0.41 ± 0.13 mm. The average new left gonial error was 0.43 ± 0.77 mm. The average new right gonial error was 0.45 ± 0.69 mm. The average pogonion error was 0.79 ± 1.21 mm. Patients were very satisfied with the aesthetic results. Surgeons were very satisfied with the performance of the surgical templates in facilitating the operation. Our standardized protocol of VSP and 3-D printed surgical template-assisted single-stage mandible contour surgery results in accurate, safe, and predictable outcomes in a single stage.

  14. The early results of excimer laser photorefractive keratectomy for compound myopic astigmatism.

    PubMed

    Horgan, S E; Pearson, R V

    1996-01-01

    An excimer laser (VISX Twenty/Twenty Excimer Refractive System) was used to treat 51 eyes for myopia and astigmatism. Uncorrected pretreatment visual acuity was between 6/18 and 6/60 (log unit +0.45 to +1.0) in 59% and worse than 6/60 in 29%. The mean pretreatment spherical refractive error was -4.05 dioptre (range 1.25 to 13.25), and the mean pretreatment cylindrical error was -0.97 dioptre (range 0.25 to 4.00). Uncorrected visual acuity measured 6/6 or better (log unit 0.0 or less) in 80% at three months, and averaged 6/6 for all eyes at six months post-treatment, with 75% of eyes obtaining 6/6 or better. The mean post-treatment spherical error depended on the pretreatment value, with a mean sphere of -0.20 dioptre for eyes initially less than -2.00 dioptre, -0.40 dioptre for those between -2.25 and -3.00, -0.71 dioptre for those between -4.25 and -5.00, and -1.15 dioptre for eyes initially above -6.25 dioptre. Vectored cylindrical correction exhibited a response proportional to the initial refraction, with a mean post-treatment cylinder of -1.83 dioptre for eyes formerly averaging -3.08 dioptre, -0.55 dioptre for eyes initially averaging -1.63 dioptre, and -0.51 dioptre for eyes initially averaging -0.67 dioptre. Vector analysis of post-treatment astigmatism showed 58% of eyes exhibiting 51 or more degrees of axis shift, although 34% of eyes remained within 20 degrees of their pretreatment axis. An effective reduction in spherocylindrical error was achieved in all eyes, although axis misalignment was a common event.

  15. Refractive errors in patients with newly diagnosed diabetes mellitus.

    PubMed

    Yarbağ, Abdülhekim; Yazar, Hayrullah; Akdoğan, Mehmet; Pekgör, Ahmet; Kaleli, Suleyman

    2015-01-01

    Diabetes mellitus is a complex metabolic disorder that involves the small blood vessels, often causing widespread damage to tissues, including changes in the refractive error of the eye. In patients with newly diagnosed diabetes mellitus who have unstable blood glucose levels, refraction measurements may be misleading. We aimed to investigate refraction in patients who were recently diagnosed with diabetes and treated at our centre. This prospective study was performed from February 2013 to January 2014. Patients were diagnosed with diabetes mellitus using laboratory biochemical tests and clinical examination. Venous fasting plasma glucose (fpg) levels were measured along with refractive errors. Two measurements were taken: initially and after four weeks. The difference between the initial and final refractive measurements was evaluated. Our patients were 100 males and 30 females who had been newly diagnosed with type II DM. The refractive and fpg levels were measured twice in all patients. The average values of the initial measurements were as follows: fpg level, 415 mg/dl; average refractive value, +2.5 D (dioptres). The average end-of-period measurements were fpg, 203 mg/dl; average refractive value, +0.75 D. There was a statistically significant difference between the four-week and initial fasting plasma glucose (fpg) measurements (p<0.05), a statistically significant relationship between changes in fpg and changes in the glasses prescription (p<0.05), and a statistically significant resolution of blurred vision (success rate greater than 50%) (p<0.05). No effects of age or sex on these results were detected (p>0.05). Refractive error is affected in patients with newly diagnosed diabetes mellitus; therefore, plasma glucose levels should be considered in the selection of glasses.

  16. Quality assurance of dynamic parameters in volumetric modulated arc therapy.

    PubMed

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-07-01

    The purpose of this study was to demonstrate quality assurance checks of the accuracy of gantry speed and position, dose rate, and multileaf collimator (MLC) speed and position for a volumetric modulated arc therapy (VMAT) modality (Synergy S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Three tests (for gantry position-dose delivery synchronisation, gantry speed-dose delivery synchronisation, and MLC leaf speed and positions) were performed. The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the "beginning" and "end" errors. For MLC position verification, the maximum error was -2.46 mm and the mean error was 0.0153 ± 0.4668 mm, and 3.4% of the leaves analysed showed errors of >±1 mm. This experiment demonstrates that the variables and parameters of the Synergy S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC.

  17. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
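
    A minimal sketch of the image-to-image mapping described above is given below in Keras, assuming a small fully convolutional network that maps a stack of two subtraction images to one perfusion image and is trained with an MSE loss against the average of six repetitions. The layer sizes, matrix size, and synthetic data are illustrative assumptions; the paper's actual architecture, preprocessing, and clinical data are not reproduced.

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        H, W = 64, 64          # matrix size of one ASL slice (illustrative)
        N_IN = 2               # number of pair-wise subtraction images fed to the CNN

        def build_cnn():
            """Small image-to-image CNN: N_IN noisy subtractions -> 1 perfusion map."""
            inp = keras.Input(shape=(H, W, N_IN))
            x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
            x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
            out = layers.Conv2D(1, 3, padding="same")(x)
            return keras.Model(inp, out)

        # Synthetic stand-in data: "ground truth" = average of 6 noisy repetitions
        rng = np.random.default_rng(0)
        clean = rng.uniform(0, 1, size=(200, H, W, 1)).astype("float32")
        reps = clean + rng.normal(0, 0.3, size=(200, H, W, 6)).astype("float32")
        ground_truth = reps.mean(axis=-1, keepdims=True)   # 6-average reference
        inputs = reps[..., :N_IN]                          # only 2 subtractions kept

        model = build_cnn()
        model.compile(optimizer="adam", loss="mse")
        model.fit(inputs, ground_truth, epochs=5, batch_size=16, verbose=0)

        pred = model.predict(inputs[:1], verbose=0)
        print("MSE vs 6-average reference:", float(np.mean((pred - ground_truth[:1]) ** 2)))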

  18. Mapping DNA polymerase errors by single-molecule sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David F.; Lu, Jenny; Chang, Seungwoo

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.

  19. Mapping DNA polymerase errors by single-molecule sequencing

    DOE PAGES

    Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...

    2016-05-16

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
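
    The barcoding idea lends itself to a simple consensus step: reads sharing a barcode come from the same replication product, so random sequencing errors can be voted away before comparing the consensus with the template. The sketch below is a toy illustration of that idea, assuming pre-aligned, equal-length reads and ignoring indels and chimeric products, which a real pipeline would have to handle.

        from collections import Counter, defaultdict

        def consensus_calls(reads_by_barcode, reference, min_reads=3):
            """Toy consensus caller for barcoded single-molecule sequencing.

            reads_by_barcode: {barcode: [read strings, all aligned to `reference`]}
            Sequencing errors are random across reads of the same product, so a
            per-position majority vote removes them; positions where the consensus
            still differs from the reference are attributed to the polymerase.
            """
            polymerase_errors = defaultdict(list)
            for barcode, reads in reads_by_barcode.items():
                if len(reads) < min_reads:
                    continue                 # too few reads to build a consensus
                consensus = "".join(
                    Counter(bases).most_common(1)[0][0] for bases in zip(*reads))
                for pos, (ref_base, con_base) in enumerate(zip(reference, consensus)):
                    if ref_base != con_base:
                        polymerase_errors[barcode].append((pos, ref_base, con_base))
            return dict(polymerase_errors)

        reads = {"ACGTTT": ["GATTACA", "GATTACA", "GATTGCA"],   # one sequencing error
                 "TTGGCC": ["GACTACA", "GACTACA", "GACTACA"]}   # one replication error
        print(consensus_calls(reads, reference="GATTACA"))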

  20. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  1. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  2. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  3. Variability in Pretest-Posttest Correlation Coefficients by Student Achievement Level. NCEE 2011-4033

    ERIC Educational Resources Information Center

    Cole, Russell; Haimson, Joshua; Perez-Johnson, Irma; May, Henry

    2011-01-01

    State assessments are increasingly used as outcome measures for education evaluations. The scaling of state assessments produces variability in measurement error, with the conditional standard error of measurement increasing as average student ability moves toward the tails of the achievement distribution. This report examines the variability in…

  4. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10^-15 with 10-second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10^-15 with 10-second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.

  5. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  6. Scheduling periodic jobs that allow imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1990-01-01

    The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N and type C according to the undesirable effects of errors, and the two workload models characterize the two types of applications. The optional parts of the tasks in a type-N job need never be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
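
    As a toy illustration of the workload model (not of the paper's scheduling algorithms), the sketch below assumes every task shares one frame whose length equals its period and deadline, runs all mandatory parts first, spends the leftover processor time on optional parts in list order, and scores each task by the average fraction of its optional work left unexecuted. The task set, frame length, and per-frame variation are assumptions for illustration only.

        import random

        class Task:
            def __init__(self, name, mandatory, optional):
                self.name = name
                self.mandatory = mandatory   # must finish before the deadline
                self.optional = optional     # refines the result; may be cut short

        def simulate(tasks, frame_length, n_frames, seed=0):
            """Toy scheduler for the imprecise-computation workload model.

            All tasks share one frame (period == deadline == frame_length), which is
            a simplification of the general periodic model.  In each frame the
            mandatory parts run first; the remaining processor time is handed to the
            optional parts in list order.  A task's error in a frame is the fraction
            of its optional part left unexecuted, and a type-N job is scored by the
            average of that error over consecutive frames.
            """
            rng = random.Random(seed)
            errors = {t.name: [] for t in tasks}
            for _ in range(n_frames):
                spare = frame_length - sum(t.mandatory for t in tasks)
                if spare < 0:
                    raise ValueError("mandatory parts alone are infeasible")
                for t in tasks:
                    demand = t.optional * rng.uniform(0.8, 1.2)  # per-frame variation
                    done = min(demand, spare)
                    spare -= done
                    errors[t.name].append((demand - done) / demand)
            return {name: sum(e) / len(e) for name, e in errors.items()}

        tasks = [Task("A", mandatory=2.0, optional=3.0),
                 Task("B", mandatory=1.0, optional=4.0),
                 Task("C", mandatory=1.5, optional=2.0)]
        print(simulate(tasks, frame_length=8.0, n_frames=100))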

  7. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the + or - 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error seems to suggest that by utilizing the + or - 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  8. ALT space shuttle barometric altimeter altitude analysis

    NASA Technical Reports Server (NTRS)

    Killen, R.

    1978-01-01

    The accuracy of the barometric altimeters onboard the space shuttle orbiter was analyzed. Altitude estimates from the air data systems, including the operational instrumentation and the developmental flight instrumentation, were obtained for each of the approach and landing test flights. By comparing the barometric altitude estimates to altitudes derived from radar tracking data filtered through a Kalman filter and fully corrected for atmospheric refraction, the errors in the barometric altitudes were shown to be 4 to 5 percent of the Kalman altitudes. By comparing the altitude determined from the true atmosphere derived from weather balloon data to the altitude determined from the U.S. Standard Atmosphere of 1962, it was determined that the assumptions of the Standard Atmosphere equations contribute roughly 75 percent of the total error in the barometric estimates. After correcting the barometric altitude estimates using an average summer model atmosphere computed for the average latitude of the space shuttle landing sites, the residual error in the altitude estimates was reduced to less than 373 feet. This corresponds to an error of less than 1.5 percent for altitudes above 4000 feet for all flights.
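
    The standard-atmosphere step of such an analysis is a closed-form pressure-to-altitude conversion in the linear-lapse (tropospheric) layer, and its sensitivity to the assumed sea-level temperature is what the summer-model correction above exploits. The sketch below is a generic illustration using the lower-troposphere constants shared by the 1962 and 1976 U.S. Standard Atmospheres; the example pressure and the warm-model temperature are assumptions, not values from the flights.

        import math

        # Lower-troposphere constants shared by the 1962 and 1976 U.S. Standard Atmospheres
        P0 = 101325.0      # sea-level pressure, Pa
        T0 = 288.15        # sea-level temperature, K
        L = 0.0065         # temperature lapse rate, K/m
        G = 9.80665        # gravitational acceleration, m/s^2
        R = 287.053        # specific gas constant of dry air, J/(kg K)
        M_TO_FT = 3.28084

        def pressure_altitude_ft(pressure_pa, t0=T0):
            """Altitude implied by static pressure under a linear-lapse atmosphere.
            Passing a warmer t0 (e.g. a summer model) raises the estimate, which is
            the kind of seasonal correction described in the abstract."""
            h_m = (t0 / L) * (1.0 - (pressure_pa / P0) ** (R * L / G))
            return h_m * M_TO_FT

        p = 62000.0  # example static pressure in Pa (an assumed value)
        print("1962 standard atmosphere:", round(pressure_altitude_ft(p)), "ft")
        print("warm summer model       :", round(pressure_altitude_ft(p, t0=295.15)), "ft")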

  9. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idem-potent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either themore » sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.« less

  10. Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin

    USGS Publications Warehouse

    Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.

    2007-01-01

    The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records and unregulated flow and provided areal coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration, and base-flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage, and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.

  11. Incorporating GIS building data and census housing statistics for sub-block-level population estimation

    USGS Publications Warehouse

    Wu, S.-S.; Wang, L.; Qiu, X.

    2008-01-01

    This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
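
    The article does not spell out its model here, so the sketch below shows only the simplest volume-proportional flavour of such an estimator: a block's census population is allocated to sub-block areas in proportion to the residential building volume they contain, with a single placeholder weight standing in for the three block-level housing statistics the published model actually combines.

        def estimate_subblock_population(block_population, building_volumes,
                                         occupancy_weight=1.0):
            """Allocate a census block's population to sub-block areas in proportion
            to the residential building volume they contain.

            building_volumes: {sub_area_id: residential building volume (m^3)}
            occupancy_weight: placeholder for an adjustment derived from block-level
            housing statistics (e.g. an occupancy rate); the published model uses
            three such statistics, which are not reproduced here.
            """
            total_volume = sum(building_volumes.values())
            if total_volume == 0:
                return {area: 0.0 for area in building_volumes}
            effective_pop = block_population * occupancy_weight
            return {area: effective_pop * vol / total_volume
                    for area, vol in building_volumes.items()}

        # Example: a block of 420 people split over three GIS-derived sub-areas
        volumes = {"parcel_A": 12000.0, "parcel_B": 8000.0, "parcel_C": 4000.0}
        print(estimate_subblock_population(420, volumes))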

  12. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
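
    The convergence criterion and the benefit of an adaptive step size can be illustrated with a much smaller stand-in problem: a single-channel, training-aided LMS equalizer over a short FIR channel, with the step size either held fixed or started large and decayed over time. Everything in the sketch (channel taps, noise level, step-size schedule, and the use of the last 10,000 symbols as the "after 50,000 symbols" reference) is an assumption for illustration; the experiment itself used 6x6 MMSE time- and frequency-domain equalizers.

        import numpy as np

        rng = np.random.default_rng(0)
        N_SYM, TAPS, DELAY = 50_000, 7, 4     # DELAY = channel delay + equalizer centre

        # Scalar stand-in for the 6x6 mode/polarization MIMO problem: QPSK symbols
        # sent through a short complex FIR channel with additive noise.
        sym = ((rng.integers(0, 2, N_SYM) * 2 - 1) +
               1j * (rng.integers(0, 2, N_SYM) * 2 - 1)) / np.sqrt(2)
        rx = np.convolve(sym, [0.1, 1.0, 0.25 - 0.2j], mode="full")[:N_SYM]
        rx = rx + 0.05 * (rng.standard_normal(N_SYM) + 1j * rng.standard_normal(N_SYM))

        def lms(step_of_n):
            """Training-aided complex LMS equalizer; returns |error|^2 per symbol."""
            w = np.zeros(TAPS, dtype=complex)
            err2 = np.zeros(N_SYM)
            for n in range(TAPS, N_SYM):
                x = rx[n - TAPS + 1:n + 1][::-1]       # most recent sample first
                e = sym[n - DELAY] - np.vdot(w, x)     # np.vdot conjugates w
                w = w + step_of_n(n) * np.conj(e) * x
                err2[n] = abs(e) ** 2
            return err2

        def convergence_symbol(err2, margin=0.05, window=500):
            """First symbol where the smoothed error is within `margin` of the
            average output error over the last 10,000 symbols (a stand-in for
            the 'after 50,000 symbols' reference used in the experiment)."""
            steady = err2[-10_000:].mean()
            smooth = np.convolve(err2, np.ones(window) / window, mode="same")
            hits = np.nonzero(smooth[TAPS:] <= (1.0 + margin) * steady)[0]
            return int(hits[0]) + TAPS if hits.size else None

        fixed = lms(lambda n: 1e-3)                            # conservative fixed step
        adaptive = lms(lambda n: 1e-2 / (1.0 + n / 2000.0))    # large step, decayed over time
        print("fixed step converges near symbol   :", convergence_symbol(fixed))
        print("adaptive step converges near symbol:", convergence_symbol(adaptive))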

  13. Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems

    NASA Astrophysics Data System (ADS)

    Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen

    2017-06-01

    In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems is presented based on decode-and-forward relaying. The Exponentiated Weibull fading channel with pointing error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit error rate (BER) are developed based on the Meijer G-function. The analytical results accurately match the Monte Carlo simulation results. The outage and BER performance of the mixed system with decode-and-forward relaying are investigated considering atmospheric turbulence and pointing error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.

  14. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  15. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence and the impact of error in these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males, comprising 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1) and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST, including institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8) and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform ongoing error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.

  17. Analyzing average and conditional effects with multigroup multilevel structural equation models

    PubMed Central

    Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf

    2014-01-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668

  18. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caillet, V; Colvill, E; Royal North Shore Hospital, St Leonards, Sydney

    2016-06-15

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf-fitting approach, respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high- and low-modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Varian Trilogy linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between the ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to compare the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. For the physical experiments, the KS D-statistics were below 0.05, denoting no significant difference between the two distributions, and the average error areas (direct optimization/piecewise leaf fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison of the two leaf-fitting algorithms demonstrated no significant differences in tracking error, either in a clinically realistic environment or in silico. The similarity of the two independent algorithms gives confidence in the use of either algorithm for clinical implementation.

  19. Influence of Familiarization and Competitive Level on the Reliability of Countermovement Vertical Jump Kinetic and Kinematic Variables.

    PubMed

    Nibali, Maria L; Tombleson, Tom; Brady, Philip H; Wagner, Phillip

    2015-10-01

    Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.

  20. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    PubMed

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination significantly reduce the random error compared to the signal averaging algorithm taken alone.
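
    The dilution compensation can be pictured as rescaling each diluted alcohol reading by the ratio of a nominal end-expiratory CO2 level to the measured CO2 level, then averaging (optionally with weights) to suppress random error. The sketch below illustrates this principle only; the function name estimate_brac, the nominal CO2 value of 4.2%, and the choice of weights are assumptions, not the authors' calibration or algorithm.

```python
import numpy as np

def estimate_brac(alcohol_signal, co2_signal, co2_alveolar=4.2, weights=None):
    """Estimate breath alcohol concentration from diluted samples.

    Each diluted sample is scaled by co2_alveolar / co2_measured so that all
    samples refer to undiluted (end-expiratory) breath, then an (optionally
    weighted) average is taken to reduce random error. co2_alveolar is a
    nominal end-expiratory CO2 percentage (an assumed value here).
    """
    alcohol_signal = np.asarray(alcohol_signal, dtype=float)
    co2_signal = np.asarray(co2_signal, dtype=float)
    corrected = alcohol_signal * (co2_alveolar / co2_signal)
    return np.average(corrected, weights=weights)

# Three diluted breath samples: alcohol readings (arbitrary units) and measured %CO2.
alc = [0.08, 0.11, 0.06]
co2 = [1.5, 2.1, 1.1]
print(estimate_brac(alc, co2))               # plain average
print(estimate_brac(alc, co2, weights=co2))  # weight the less diluted samples more
```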

  1. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication schemes that use a single signature feature. It addresses the heavy computational workload of extracting global feature templates in online handwritten signature authentication and the problem of unreasonable feature selection. In the experiments, the false recognition rate (FAR) and false rejection rate (FRR) of the signatures are computed, and the average equal error rate (AEER) is calculated. The feasibility of the combined template scheme is verified by comparing the average equal error rate of the combined template with that of the original template.
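
    The discrete Fréchet distance between two sampled pen trajectories can be computed with the standard dynamic-programming recurrence. The sketch below shows that recurrence on toy 2D point lists; the Euclidean point metric, the (x, y) tuple format and the function name are assumptions for illustration, not the paper's multi-feature template code.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q (lists of (x, y) points)."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        # Coupling distance up to points P[i] and Q[j].
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    # Note: long trajectories would need an iterative table to avoid deep recursion.
    return c(len(P) - 1, len(Q) - 1)

# Two toy pen trajectories: a reference and a questioned signature stroke.
ref = [(0, 0), (1, 1), (2, 1), (3, 0)]
qst = [(0, 0.2), (1, 1.1), (2, 0.9), (3, 0.1)]
print(discrete_frechet(ref, qst))
```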

  2. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  3. Alignment error envelopes for single particle analysis.

    PubMed

    Jensen, G J

    2001-01-01

    To determine the structure of a biological particle to high resolution by electron microscopy, image averaging is required to combine information from different views and to increase the signal-to-noise ratio. Starting from the number of noiseless views necessary to resolve features of a given size, four general factors are considered that increase the number of images actually needed: (1) the physics of electron scattering introduces shot noise, (2) thermal motion and particle inhomogeneity cause the scattered electrons to describe a mixture of structures, (3) the microscope system fails to usefully record all the information carried by the scattered electrons, and (4) image misalignment leads to information loss through incoherent averaging. The compound effect of factors 2-4 is approximated by the product of envelope functions. The problem of incoherent image averaging is developed in detail through derivation of five envelope functions that account for small errors in 11 "alignment" parameters describing particle location, orientation, defocus, magnification, and beam tilt. The analysis provides target error tolerances for single particle analysis to near-atomic (3.5 Å) resolution, and this prospect is shown to depend critically on image quality, defocus determination, and microscope alignment. Copyright 2001 Academic Press.

  4. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
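
    Extracting a beat frequency from a short, damped oscillatory signal with a zero-padded DFT is the kind of frequency analysis described above. The sketch below illustrates this on a synthetic signal; the damped-cosine model, sampling rate, damping constant and Hann window are assumptions, not the authors' processing chain or experimental parameters.

```python
import numpy as np

fs = 5.0e9                      # sampling rate, Hz (assumed)
t = np.arange(0, 200e-9, 1 / fs)
f_beat = 180e6                  # true beat frequency, Hz (assumed)
tau = 60e-9                     # damping time constant, s (assumed)
sig = np.exp(-t / tau) * np.cos(2 * np.pi * f_beat * t)
sig += 0.05 * np.random.default_rng(3).standard_normal(t.size)

# Zero-padded DFT: a finer frequency grid reduces the discretization (picket-fence) error.
n_fft = 1 << 18
spec = np.abs(np.fft.rfft(sig * np.hanning(t.size), n=n_fft))
freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
f_est = freqs[np.argmax(spec)]
print(f"estimated beat frequency: {f_est / 1e6:.2f} MHz (true {f_beat / 1e6:.0f} MHz)")
```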

  5. Theoretical Accuracy of Global Snow-Cover Mapping Using Satellite Data in the Earth Observing System (EOS) Era

    NASA Technical Reports Server (NTRS)

    Hall, D. K.; Foster, J. L.; Salomonson, V. V.; Klein, A. G.; Chien, J. Y. L.

    1998-01-01

    Following the launch of the Earth Observing System first morning (EOS-AM1) satellite, daily, global snow-cover mapping will be performed automatically at a spatial resolution of 500 m, cloud-cover permitting, using Moderate Resolution Imaging Spectroradiometer (MODIS) data. A technique to calculate theoretical accuracy of the MODIS-derived snow maps is presented. Field studies demonstrate that under cloud-free conditions when snow cover is complete, snow-mapping errors are small (less than 1%) in all land covers studied except forests where errors are greater and more variable. The theoretical accuracy of MODIS snow-cover maps is largely determined by percent forest cover north of the snowline. Using the 17-class International Geosphere-Biosphere Program (IGBP) land-cover maps of North America and Eurasia, the Northern Hemisphere is classified into seven land-cover classes and water. Snow-mapping errors estimated for each of the seven land-cover classes are extrapolated to the entire Northern Hemisphere for areas north of the average continental snowline for each month. Average monthly errors for the Northern Hemisphere are expected to range from 5 - 10%, and the theoretical accuracy of the future global snow-cover maps is 92% or higher. Error estimates will be refined after the first full year that MODIS data are available.

  6. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
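
    The key object is the hybrid estimate itself: a weighted average of the flow-dependent ensemble sample variance and the static climatological error variance. The sketch below illustrates that combination on synthetic (observation-minus-forecast, ensemble-variance) pairs. The way the weight is estimated here, a regression of squared innovations on ensemble variance, is an illustrative assumption and not the formula derived in the paper.

```python
import numpy as np

def hybrid_variance(sample_var, clim_var, weight):
    """Hybrid error-variance estimate: weighted average of the ensemble sample
    variance (flow dependent) and the climatological variance (static)."""
    return weight * sample_var + (1.0 - weight) * clim_var

# Synthetic archive of (observation-minus-forecast, ensemble-variance) pairs.
rng = np.random.default_rng(1)
true_var = rng.gamma(shape=4.0, scale=0.5, size=5000)      # hidden true error variances
innov = rng.normal(0.0, np.sqrt(true_var))                 # obs-minus-forecast draws
ens_var = true_var * rng.chisquare(df=9, size=5000) / 9    # noisy ensemble sample variances

clim_var = innov.var()                                     # climatological variance
# One simple (assumed) way to pick the weight: regress squared innovations on ensemble variance.
w = np.cov(innov**2, ens_var)[0, 1] / np.var(ens_var)
w = float(np.clip(w, 0.0, 1.0))
print("hybrid weight:", round(w, 3))
print("hybrid variance for an ensemble variance of 1.5:", hybrid_variance(1.5, clim_var, w))
```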

  7. Crowdsourcing for error detection in cortical surface delineations.

    PubMed

    Ganz, Melanie; Kondermann, Daniel; Andrulis, Jonas; Knudsen, Gitte Moos; Maier-Hein, Lena

    2017-01-01

    With the recent trend toward big data analysis, neuroimaging datasets have grown substantially in the past years. While larger datasets potentially offer important insights for medical research, one major bottleneck is the requirement for resources of medical experts needed to validate automatic processing results. To address this issue, the goal of this paper was to assess whether anonymous nonexperts from an online community can perform quality control of MR-based cortical surface delineations derived by an automatic algorithm. So-called knowledge workers from an online crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity of 82 % and a precision of 42 %. Merging multiple annotations per image significantly improves the sensitivity of the crowd (up to 95 %), but leads to a decrease in precision (as low as 22 %). Our experiments show that the detection of errors in automatic cortical surface delineations generated by anonymous untrained workers is feasible. Future work will focus on increasing the sensitivity of our method further, such that the error detection tasks can be handled exclusively by the crowd and expert resources can be focused on error correction.
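
    The sensitivity/precision trade-off from merging multiple annotations can be reproduced with a simple vote threshold: a union-style merge (one vote suffices) raises sensitivity, while requiring agreement raises precision. The sketch below assumes binary per-slice "error present" labels and toy data; it is an illustration of the merging principle, not the study's annotation pipeline.

```python
import numpy as np

def merge_annotations(crowd_labels, min_votes=1):
    """Merge per-image binary error annotations from several workers.

    crowd_labels: array of shape (n_workers, n_images), 1 = error flagged.
    min_votes=1 is a union merge (higher sensitivity, possibly lower precision);
    larger values require agreement and behave more like a majority vote.
    """
    votes = np.sum(crowd_labels, axis=0)
    return (votes >= min_votes).astype(int)

def sensitivity_precision(pred, expert):
    tp = np.sum((pred == 1) & (expert == 1))
    fn = np.sum((pred == 0) & (expert == 1))
    fp = np.sum((pred == 1) & (expert == 0))
    sens = tp / (tp + fn) if tp + fn else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    return sens, prec

# Toy data: 3 workers annotating 8 slices, compared against an expert reference.
crowd = np.array([[1, 0, 1, 0, 0, 1, 0, 1],
                  [1, 1, 0, 0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1, 0, 1]])
expert = np.array([1, 0, 1, 0, 0, 1, 0, 1])

for k in (1, 2, 3):
    print(k, sensitivity_precision(merge_annotations(crowd, k), expert))
```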

  8. Evaluating the design of satellite scanning radiometers for earth radiation budget measurements with system simulations. Part 1: Instantaneous estimates

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Ardanuy, Philip; Hucek, Richard; Abel, Peter; Jacobowitz, Herbert

    1991-01-01

    A set of system simulations was performed to evaluate candidate scanner configurations to fly as part of the Earth Radiation Budget Instrument (ERBI) on the polar platforms during the 1990s. The simulation considered instantaneous sampling (without diurnal averaging) of the longwave and shortwave fluxes at the top of the atmosphere (TOA). After measurement and subsequent inversion to the TOA, the measured fluxes were compared to the reference fluxes for 2.5 deg lat/long resolution targets. The reference fluxes at this resolution are obtained by integrating over the 25 x 25 = 625 grid elements in each target. The differences between each of these two resultant spatially averaged sets of target measurements (errors) are taken and then statistically summarized. Five instruments are considered: (1) the Conically Scanning Radiometer (CSR); (2) the ERBE Cross Track Scanner; (3) the Nimbus-7 Biaxial Scanner; (4) the Clouds and Earth's Radiant Energy System Instrument (CERES-1); and (5) the Active Cavity Array (ACA). Identical studies of instantaneous error were completed for many days, two seasons, and several satellite equator crossing longitudes. The longwave flux errors were found to have the same space and time characteristics as the shortwave flux errors, but are only about 25 percent of their magnitude.

  9. Efficient detection of dangling pointer error for C/C++ programs

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzhe

    2017-08-01

    Dangling pointer errors are pervasive in C/C++ programs and are very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and more than 50% over a previous page-protection-based method.

  10. [Study on the appropriate parameters of automatic full crown tooth preparation for dental tooth preparation robot].

    PubMed

    Yuan, F S; Wang, Y; Zhang, Y P; Sun, Y C; Wang, D X; Lyu, P J

    2017-05-09

    Objective: To further study the most suitable parameters for automatic full crown preparation using an oral clinical micro robot, with the aim of improving the quality of automated tooth preparation and laying the foundation for clinical application. Methods: Twenty selected artificial resin teeth were used as sample teeth. The micro robot automatic tooth preparation system was used to control the picosecond laser beam to perform two-dimensional cutting on the resin tooth samples according to the planned motion path. Using a laser scanning measuring microscope, the cutting depth of each layer was obtained and the average value was calculated to determine the monolayer cutting depth. The three-dimensional (3D) data of the target resin teeth were obtained with an intraoral scanner, and the CAD data of the full-crown tooth preparation were designed with self-developed CAD software. According to the single-layer depth, 11 complete resin teeth in a phantom head were automatically prepared by the robot, which controlled the focused laser spot in a layer-by-layer cutting manner, and the accuracy of the resin tooth preparations was evaluated with the software. Using the same method, the monolayer cutting depth parameter for dental hard tissue was obtained, and 15 extracted mandibular and maxillary first molars underwent automatic full crown tooth preparation. The 3D data of these tooth preparations were obtained with an intraoral scanner, and the software was used to evaluate the accuracy of the preparations. Results: The single cutting depths of the picosecond laser in resin teeth and in vitro teeth were (60.0±2.6) and (45.0±3.6) μm, respectively. Using the tooth preparation robot, 11 artificial resin teeth and 15 complete natural teeth were automatically prepared, with average times of (13.0±0.7) and (17.0±1.8) min, respectively. Through software evaluation, the average preparation depth of the occlusal surface of the 11 resin teeth was approximately (2.089±0.026) mm, with an error of about (0.089±0.026) mm; the average convergence angle was about 6.56°±0.30°, with an error of about 0.56°±0.30°. Compared with the target preparation shape, the average shape error of the 11 resin tooth preparations was about 0.02-0.11 mm. The average preparation depth of the occlusal surface of the 15 natural teeth was approximately (2.097±0.022) mm, with an error of about (0.097±0.022) mm; the average convergence angle was about 6.98°±0.35°, with an error of about 0.98°±0.35°. Compared with the target preparation shape, the average shape error of the 15 natural tooth preparations was about 0.05-0.17 mm. Conclusions: Automatic tooth preparation of resin teeth and natural teeth was completed by the micro robot controlling the picosecond laser according to the measured single-layer cutting depths, and the preparation accuracy met clinical needs, confirming the suitability of the parameters.

  11. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    ABSTRACT Global navigation satellite systems such as the Global Positioning System (GPS) is one of the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar error. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
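
    The overestimation effect and its dependence on error autocorrelation can be reproduced with a toy simulation: summing step lengths of noisy positions along a straight path. The sketch below assumes a 100 m straight walk, Gaussian position noise of 1 m, and an AR(1) model for the noise autocorrelation; these numbers and the noise model are illustrative assumptions, not the paper's data or estimator for C.

```python
import numpy as np

rng = np.random.default_rng(2)

# A pedestrian walking in a straight line, sampled at high frequency.
n = 2000
truth = np.column_stack([np.linspace(0, 100, n), np.zeros(n)])   # 100 m straight path
true_len = np.sum(np.linalg.norm(np.diff(truth, axis=0), axis=1))

def recorded_length(rho, sigma=1.0):
    """Recorded path length from noisy positions; rho is the lag-1 autocorrelation of the error."""
    noise = np.zeros((n, 2))
    noise[0] = rng.normal(0, sigma, 2)
    for k in range(1, n):
        # AR(1) error: consecutive position errors are correlated with coefficient rho.
        noise[k] = rho * noise[k - 1] + np.sqrt(1 - rho**2) * rng.normal(0, sigma, 2)
    pos = truth + noise
    return np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))

for rho in (0.0, 0.9, 0.99):
    print(f"error autocorrelation {rho:4.2f}: recorded {recorded_length(rho):7.1f} m "
          f"vs true {true_len:5.1f} m")
```

    With uncorrelated noise the recorded length is wildly inflated; as the autocorrelation approaches one, consecutive errors cancel and the bias shrinks, which is the point made in the abstract.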

  12. Predictors of driving safety in early Alzheimer disease.

    PubMed

    Dawson, J D; Anderson, S W; Uc, E Y; Dastrup, E; Rizzo, M

    2009-02-10

    To measure the association of cognition, visual perception, and motor function with driving safety in Alzheimer disease (AD). Forty drivers with probable early AD (mean Mini-Mental State Examination score 26.5) and 115 elderly drivers without neurologic disease underwent a battery of cognitive, visual, and motor tests, and drove a standardized 35-mile route in urban and rural settings in an instrumented vehicle. A composite cognitive score (COGSTAT) was calculated for each subject based on eight neuropsychological tests. Driving safety errors were noted and classified by a driving expert based on video review. Drivers with AD committed an average of 42.0 safety errors/drive (SD = 12.8), compared to an average of 33.2 (SD = 12.2) for drivers without AD (p < 0.0001); the most common errors were lane violations. Increased age was predictive of errors, with a mean of 2.3 more errors per drive observed for each 5-year age increment. After adjustment for age and gender, COGSTAT was a significant predictor of safety errors in subjects with AD, with a 4.1 increase in safety errors observed for a 1 SD decrease in cognitive function. Significant increases in safety errors were also found in subjects with AD with poorer scores on Benton Visual Retention Test, Complex Figure Test-Copy, Trail Making Subtest-A, and the Functional Reach Test. Drivers with Alzheimer disease (AD) exhibit a range of performance on tests of cognition, vision, and motor skills. Since these tests provide additional predictive value of driving performance beyond diagnosis alone, clinicians may use these tests to help predict whether a patient with AD can safely operate a motor vehicle.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuangrod, T; Simpson, J; Greer, P

    Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%), and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real time, allowing for treatment interruption. Patient displacements take longer to detect; however, an incorrect body site or a very large geographic miss will be detected rapidly.

  14. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.

  15. Density Functional Theory Calculation of pKa's of Thiols in Aqueous Solution Using Explicit Water Molecules and the Polarizable Continuum Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2016-07-21

    The pKa's of substituted thiols are important for understanding their properties and reactivities in applications in chemistry, biochemistry, and material chemistry. For a collection of 175 different density functionals and the SMD implicit solvation model, the average errors in the calculated pKa's of methanethiol and ethanethiol are almost 10 pKa units higher than for imidazole. A test set of 45 substituted thiols with pKa's ranging from 4 to 12 has been used to assess the performance of 8 functionals with 3 different basis sets. As expected, the basis set needs to include polarization functions on the hydrogens and diffuse functions on the heavy atoms. Solvent cavity scaling was ineffective in correcting the errors in the calculated pKa's. Inclusion of an explicit water molecule that is hydrogen bonded with the H of the thiol group (in neutral) or S(-) (in thiolates) lowers error by an average of 3.5 pKa units. With one explicit water and the SMD solvation model, pKa's calculated with the M06-2X, PBEPBE, BP86, and LC-BLYP functionals are found to deviate from the experimental values by about 1.5-2.0 pKa units whereas pKa's with the B3LYP, ωB97XD and PBEVWN5 functionals are still in error by more than 3 pKa units. The inclusion of three explicit water molecules lowers the calculated pKa further by about 4.5 pKa units. With the B3LYP and ωB97XD functionals, the calculated pKa's are within one unit of the experimental values whereas most other functionals used in this study underestimate the pKa's. This study shows that the ωB97XD functional with the 6-31+G(d,p) and 6-311++G(d,p) basis sets, and the SMD solvation model with three explicit water molecules hydrogen bonded to the sulfur produces the best result for the test set (average error -0.11 ± 0.50 and +0.15 ± 0.58, respectively). The B3LYP functional also performs well (average error -1.11 ± 0.82 and -0.78 ± 0.79, respectively).
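
    The final step common to such calculations is converting an aqueous deprotonation free energy into a pKa via pKa = ΔG_aq / (RT ln 10). The sketch below shows only that unit conversion; the example ΔG value is illustrative, and the thermodynamic-cycle terms (gas-phase deprotonation, solvation free energies of the species and the proton, standard-state corrections) that produce ΔG_aq in practice are not reproduced here.

```python
import math

R = 1.987204e-3   # gas constant, kcal/(mol*K)
T = 298.15        # temperature, K

def pka_from_deltaG(delta_g_kcal):
    """pKa from the aqueous deprotonation free energy of RSH(aq) -> RS-(aq) + H+(aq)."""
    return delta_g_kcal / (R * T * math.log(10))

# An illustrative deprotonation free energy of 14.0 kcal/mol gives pKa of about 10.3.
print(pka_from_deltaG(14.0))
```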

  16. Error assessment in molecular dynamics trajectories using computed NMR chemical shifts.

    PubMed

    Koes, David R; Vries, John K

    2017-01-01

    Accurate chemical shifts for the atoms in molecular dynamics (MD) trajectories can be obtained from quantum mechanical (QM) calculations that depend solely on the coordinates of the atoms in the localized regions surrounding atoms of interest. If these coordinates are correct and the sample size is adequate, the ensemble average of these chemical shifts should be equal to the chemical shifts obtained from NMR spectroscopy. If this is not the case, the coordinates must be incorrect. We have utilized this fact to quantify the errors associated with the backbone atoms in MD simulations of proteins. A library of regional conformers containing 169,499 members was constructed from 6 model proteins. The chemical shifts associated with the backbone atoms in each of these conformers were obtained from QM calculations using density functional theory at the B3LYP level with a 6-311+G(2d,p) basis set. Chemical shifts were assigned to each backbone atom in each MD simulation frame using a template matching approach. The ensemble average of these chemical shifts was compared to chemical shifts from NMR spectroscopy. A large systematic error was identified that affected the ¹H atoms of the peptide bonds involved in hydrogen bonding with water molecules or peptide backbone atoms. This error was highly sensitive to changes in electrostatic parameters. Smaller errors affecting the ¹³Cα and ¹⁵N atoms were also detected. We believe these errors could be useful as metrics for comparing the force fields and parameter sets used in MD simulation because they are directly tied to errors in atomic coordinates.

  17. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.

  18. Estimation of lower flammability limits of C-H compounds in air at atmospheric pressure, evaluation of temperature dependence and diluent effect.

    PubMed

    Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R

    2015-03-21

    The objective of this study was to estimate the lower flammability limits of C-H compounds in air at 25 °C and 1 atm, at moderately elevated temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set, it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data these values were reduced to 6.5% for the prediction set and to 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested for a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane; 4.78% for propane; 0.29% for iso-butane and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane; 5.13% for propane; 0.11% for iso-butane and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane; 5.38% for propane; 0.86% for iso-butane and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
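
    The Le Chatelier consistency check referred to above rests on the mixing rule LFL_mix = 1 / Σ(y_i / LFL_i), with y_i the fuel-basis mole fractions. Below is a minimal sketch of that rule; the pure-component LFL values used in the example are approximate literature values included only for illustration.

```python
def lfl_le_chatelier(fractions, lfl_values):
    """Lower flammability limit of a fuel mixture (vol% in air) by Le Chatelier's law:
    LFL_mix = 1 / sum(y_i / LFL_i), with y_i the fuel-basis mole fractions."""
    if abs(sum(fractions) - 1.0) > 1e-6:
        raise ValueError("fuel fractions must sum to 1")
    return 1.0 / sum(y / lfl for y, lfl in zip(fractions, lfl_values))

# 60/40 methane-propane mixture; pure-component LFLs (vol%) are approximate values.
print(lfl_le_chatelier([0.6, 0.4], [5.0, 2.1]))   # about 3.2 vol%
```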

  19. Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana

    USGS Publications Warehouse

    Stewart, J.A.; Miller, R.L.; Butch, G.K.

    1986-01-01

    Analysis of the stream gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate stream flow data for three gaging stations. In addition, flow routing models were used at two of the sites. Daily discharges produced from models did not meet the established accuracy criteria and, therefore, these methods should not replace stream gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream gaging stations were visited according to phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)

  20. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A.

    2017-11-01

    This paper considers Leica backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, in this paper a different approach is considered: this work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low-cost Pozyx UWB devices are used to estimate camera positions during image acquisition. Then, in order to obtain a metric reconstruction, the scale factor of the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the photogrammetric model of the bastion has an RMSE of 21.9 cm, an average error of 13.4 cm, and a standard deviation of 13.5 cm. Excluding the final part of the bastion's left wing, where the presence of several poles makes reconstruction more difficult, the RMSE fitting error is 17.3 cm, the average error 11.5 cm, and the standard deviation 9.5 cm. Comparison of the Leica backpack and TLS surveys instead leads to an average error of 4.7 cm and a standard deviation of 0.6 cm (4.2 cm and 0.3 cm, respectively, when excluding the final part of the left wing).
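
    One standard way to recover a metric scale from two sets of corresponding camera positions is the scale component of a least-squares similarity transform: the ratio of RMS spreads about the respective centroids. The sketch below illustrates that estimator on synthetic positions; it is a generic illustration of the principle under these assumptions, not the estimator actually used in the project.

```python
import numpy as np

def scale_from_camera_positions(cam_sfm, cam_uwb):
    """Scale factor that brings an up-to-scale photogrammetric reconstruction to metric units.

    cam_sfm: (n, 3) camera centres from the photogrammetric (SfM) reconstruction.
    cam_uwb: (n, 3) camera positions measured with the UWB system, in metres.
    The scale is the ratio of RMS spreads about the respective centroids.
    """
    a = cam_sfm - cam_sfm.mean(axis=0)
    b = cam_uwb - cam_uwb.mean(axis=0)
    return np.sqrt((b ** 2).sum() / (a ** 2).sum())

# Toy example: the SfM model is the UWB geometry shrunk by an unknown factor, plus noise.
rng = np.random.default_rng(5)
uwb = rng.uniform(0, 20, size=(12, 3))                        # 12 camera positions, metres
sfm = 0.37 * (uwb - uwb.mean(axis=0)) + rng.normal(0, 0.02, size=(12, 3))
s = scale_from_camera_positions(sfm, uwb)
print(s)   # about 2.70, i.e. multiply the SfM model by s to obtain metres (1/0.37 = 2.70)
```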

  1. Performance analysis of decode-and-forward dual-hop optical spatial modulation with diversity combiner over atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Odeyemi, Kehinde O.; Owolawi, Pius A.; Srivastava, Viranjay M.

    2017-11-01

    Dual-hop transmission is a technique of growing interest that can be used to mitigate atmospheric turbulence along free-space optical (FSO) communication links. This paper analyzes the performance of decode-and-forward (DF) dual-hop FSO systems in conjunction with spatial modulation and diversity combiners over a Gamma-Gamma atmospheric turbulence channel using heterodyne detection. Maximum Ratio Combiner (MRC), Equal Gain Combiner (EGC) and Selection Combiner (SC) schemes are considered at the relay and destination as mitigation tools to improve the system error performance. A power series expansion of the modified Bessel function is used to derive closed-form expressions for the end-to-end Average Pairwise Error Probability (APEP) of each combiner under study, and a tight upper bound on the Average Bit Error Rate (ABER) per hop is given. The overall end-to-end ABER for the dual-hop FSO system is then evaluated. The numerical results show that dual-hop transmission systems outperform direct-link systems. Moreover, the impact of having the same or different combiners at the relay and destination is also presented. The results confirm that the combination of dual-hop transmission with spatial modulation and diversity combining significantly improves the system error rate, with the MRC combiner offering optimal performance with respect to variations in atmospheric turbulence, changes in the links' average received SNR and the link range of the system.

  2. Adinkra (in)equivalence from Coxeter group representations: A case study

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; Gates, S. James; Hübsch, T.

    2014-02-01

    Using a Mathematica™ code, we present a straightforward numerical analysis of the 384-dimensional solution space of signed permutation 4×4 matrices, which, in sets of four, provide representations of the 𝒢ℛ(4, 4) algebra, closely related to the 𝒩 = 1 (simple) supersymmetry algebra in four-dimensional space-time. Following ideas discussed in previous papers about automorphisms and classification of adinkras and corresponding supermultiplets, we make a new and alternative proposal to use equivalence classes of the (unsigned) permutation group S4 to define distinct representations of higher-dimensional spin bundles within the context of adinkras. For this purpose, the definition of a dual operator akin to the well-known Hodge star is found to partition the space of these 𝒢ℛ(4, 4) representations into three suggestive classes.

  3. Automated flare forecasting using a statistical learning technique

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Shih, Frank Y.; Jing, Ju; Wang, Hai-Min

    2010-08-01

    We present a new method for automatically forecasting the occurrence of solar flares based on photospheric magnetic measurements. The method is a cascading combination of an ordinal logistic regression model and a support vector machine classifier. The predictive variables are three photospheric magnetic parameters, i.e., the total unsigned magnetic flux, length of the strong-gradient magnetic polarity inversion line, and total magnetic energy dissipation. The output is true or false for the occurrence of a certain level of flares within 24 hours. Experimental results, from a sample of 230 active regions between 1996 and 2005, show the accuracies of a 24-hour flare forecast to be 0.86, 0.72, 0.65 and 0.84 respectively for the four different levels. Comparison shows an improvement in the accuracy of X-class flare forecasting.
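
    The cascade idea is that a cheap probabilistic model first screens active regions and a classifier then makes the final yes/no call on those that pass. The sketch below illustrates this with scikit-learn on synthetic stand-ins for the three magnetic parameters; the use of ordinary (rather than ordinal) logistic regression, the 0.2 screening threshold, and the synthetic data and labels are all assumptions for illustration, not the authors' trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-in for (total unsigned flux, PIL length, energy dissipation) per region.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(230, 3))
y = (X.sum(axis=1) + rng.normal(0, 1.0, 230) > 4.0).astype(int)  # 1 = flare within 24 h

# Stage 1: logistic regression screens out clearly quiet regions.
stage1 = LogisticRegression(max_iter=1000).fit(X, y)
keep = stage1.predict_proba(X)[:, 1] > 0.2        # assumed screening threshold

# Stage 2: an SVM classifies only the regions that pass the screen.
stage2 = SVC(kernel="rbf", gamma="scale").fit(X[keep], y[keep])

pred = np.zeros_like(y)
pred[keep] = stage2.predict(X[keep])
print("training accuracy of the cascade:", (pred == y).mean())
```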

  4. Spelling in adolescents with dyslexia: errors and modes of assessment.

    PubMed

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.

  5. Errors in Multi-Digit Arithmetic and Behavioral Inattention in Children With Math Difficulties

    PubMed Central

    Raghubar, Kimberly; Cirino, Paul; Barnes, Marcia; Ewing-Cobbs, Linda; Fletcher, Jack; Fuchs, Lynn

    2009-01-01

    Errors in written multi-digit computation were investigated in children with math difficulties. Third-and fourth-grade children (n = 291) with coexisting math and reading difficulties, math difficulties, reading difficulties, or no learning difficulties were compared. A second analysis compared those with severe math learning difficulties, low average achievement in math, and no learning difficulties. Math fact errors were related to the severity of the math difficulties, not to reading status. Contrary to predictions, children with poorer reading, regardless of math achievement, committed more visually based errors. Operation switch errors were not systematically related to group membership. Teacher ratings of behavioral inattention were related to accuracy, math fact errors, and procedural bugs. The findings are discussed with respect to hypotheses about the cognitive origins of arithmetic errors and in relation to current discussions about how to conceptualize math disabilities. PMID:19380494

  6. 40 CFR 1045.730 - What ABT reports must I send to EPA?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... volumes for the model year with a point of retail sale in the United States, as described in § 1045.701(j...) Show that your net balance of emission credits from all your participating families in each averaging... errors mistakenly decreased your balance of emission credits, you may correct the errors and recalculate...

  7. 40 CFR 1051.730 - What ABT reports must I send to EPA?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the model year with a point of retail sale in the United States, as described in § 1051.701(d). For... following additional information: (1) Show that your net balance of emission credits in each averaging set... that errors mistakenly decreased your balance of emission credits, you may correct the errors and...

  8. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  9. Automatic Detection of Preposition Errors in Learner Writing

    ERIC Educational Resources Information Center

    De Felice, Rachele; Pulman, Stephen

    2009-01-01

    In this article, we present an approach to the automatic correction of preposition errors in L2 English. Our system, based on a maximum entropy classifier, achieves average precision of 42% and recall of 35% on this task. The discussion of results obtained on correct and incorrect data aims to establish what characteristics of L2 writing prove…

  10. Land use surveys by means of automatic interpretation of LANDSAT system data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.

    1981-01-01

    Analyses for seven land-use classes are presented. The classes are: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation, and natural vegetation. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.45% error of commission for the seven classes.

  11. The effect of timing errors in optical digital systems.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1972-01-01

    The use of digital transmission with narrow light pulses appears attractive for data communications, but carries with it a stringent requirement on system bit timing. The effects of imperfect timing in direct-detection (noncoherent) optical binary systems are investigated using both pulse-position modulation and on-off keying for bit transmission. Particular emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors from which average error probabilities can be computed for specific synchronization methods. Of significance is the presence of a residual or irreducible error probability in both systems, due entirely to the timing system, which cannot be overcome by the data channel.
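
    The averaging step described above, computing an average error probability by integrating the conditional error probability over the timing-error distribution of a particular synchronizer, can be sketched numerically. In the sketch below the conditional error model is a generic Gaussian-noise on-off-keying formula and the timing jitter is Gaussian; both are assumptions for illustration only, since the paper's analysis concerns impulse noise rather than Gaussian noise.

```python
import numpy as np
from scipy.stats import norm

def average_error_probability(p_error_given_offset, jitter_std, t_bit, n=2001):
    """Average bit-error probability: integrate the conditional error probability
    over the timing-offset distribution (assumed zero-mean Gaussian, truncated to
    +/- half a bit period)."""
    eps = np.linspace(-t_bit / 2, t_bit / 2, n)
    d = eps[1] - eps[0]
    pdf = norm.pdf(eps, scale=jitter_std)
    pdf /= pdf.sum() * d                      # renormalise after truncation
    return float(np.sum(p_error_given_offset(eps) * pdf) * d)

# Illustrative conditional model: the effective SNR shrinks as the offset eats into the bit.
t_bit = 1.0
snr0 = 9.0                                    # nominal per-bit SNR (assumed)
p_cond = lambda e: norm.sf(np.sqrt(snr0) * (1.0 - np.abs(e) / t_bit))

for jitter in (0.01, 0.05, 0.10):
    p_avg = average_error_probability(p_cond, jitter, t_bit)
    print(f"timing jitter {jitter:4.2f} Tb -> average error probability {p_avg:.2e}")
```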

  12. Characterization of domain-peptide interaction interface: a case study on the amphiphysin-1 SH3 domain.

    PubMed

    Hou, Tingjun; Zhang, Wei; Case, David A; Wang, Wei

    2008-02-29

    Many important protein-protein interactions are mediated by peptide recognition modular domains, such as the Src homology 3 (SH3), SH2, PDZ, and WW domains. Characterizing the interaction interface of domain-peptide complexes and predicting binding specificity for modular domains are critical for deciphering protein-protein interaction networks. Here, we propose the use of an energetic decomposition analysis to characterize domain-peptide interactions and the molecular interaction energy components (MIECs), including van der Waals, electrostatic, and desolvation energy between residue pairs on the binding interface. We show a proof-of-concept study on the amphiphysin-1 SH3 domain interacting with its peptide ligands. The structures of the human amphiphysin-1 SH3 domain complexed with 884 peptides were first modeled using virtual mutagenesis and optimized by molecular mechanics (MM) minimization. Next, the MIECs between domain and peptide residues were computed using the MM/generalized Born decomposition analysis. We conducted two types of statistical analyses on the MIECs to demonstrate their usefulness for predicting binding affinities of peptides and for classifying peptides into binder and non-binder categories. First, combining partial least squares analysis and genetic algorithm, we fitted linear regression models between the MIECs and the peptide binding affinities on the training data set. These models were then used to predict binding affinities for peptides in the test data set; the predicted values have a correlation coefficient of 0.81 and an unsigned mean error of 0.39 compared with the experimentally measured ones. The partial least squares-genetic algorithm analysis on the MIECs revealed the critical interactions for the binding specificity of the amphiphysin-1 SH3 domain. Next, a support vector machine (SVM) was employed to build classification models based on the MIECs of peptides in the training set. A rigorous training-validation procedure was used to assess the performances of different kernel functions in SVM and different combinations of the MIECs. The best SVM classifier gave satisfactory predictions for the test set, indicated by average prediction accuracy rates of 78% and 91% for the binding and non-binding peptides, respectively. We also showed that the performance of our approach on both binding affinity prediction and binder/non-binder classification was superior to the performances of the conventional MM/Poisson-Boltzmann solvent-accessible surface area and MM/generalized Born solvent-accessible surface area calculations. Our study demonstrates that the analysis of the MIECs between peptides and the SH3 domain can successfully characterize the binding interface, and it provides a framework to derive integrated prediction models for different domain-peptide systems.

  13. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (the difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data sets used in our previous work, both as originally supplied and with added label noise, in which a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
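    The sketch below is only an illustration of the distribution-averaging idea described above, not the exact AveBoost2 update analyzed in the paper: each round reweights the current distribution against every previous base model's mistake vector and averages the resulting candidate distributions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def averaged_boost(X, y, n_rounds=10):
        # y is assumed to hold labels in {0, 1}.
        n = len(y)
        dist = np.full(n, 1.0 / n)            # current training distribution
        models, alphas, mistakes = [], [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=dist)
            pred = stump.predict(X)
            miss = (pred != y).astype(float)  # mistake vector of this base model
            err = float(np.dot(dist, miss))
            err = min(max(err, 1e-10), 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            models.append(stump); alphas.append(alpha); mistakes.append(miss)
            # Reweight against each previous mistake vector, then average the candidates.
            candidates = []
            for m in mistakes:
                w = dist * np.exp(alpha * (2 * m - 1))   # up-weight that model's mistakes
                candidates.append(w / w.sum())
            dist = np.mean(candidates, axis=0)
        return models, alphas

    def predict(models, alphas, X):
        # Weighted vote of the base models, mapped back to {0, 1}.
        votes = sum(a * (2 * m.predict(X) - 1) for m, a in zip(models, alphas))
        return (votes > 0).astype(int)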

  14. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides therefore has important significance in reducing casualties and property losses. In this paper, exploiting the high initial precision and high sensitivity of fiber Bragg gratings (FBGs), an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was treated as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration tests and model slope monitoring experiments. The displacement of the landslide could be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum errors was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; and the average of the minimum errors was 5.99%, with a corresponding variance of 0.50. The maximum error between the theoretical and measured displacements decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results become increasingly reliable. These results show that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.

  15. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K2, while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K2. In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and those of other studies during this period.

  16. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

    As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on the measurement range and accuracy, and also constrains the cost of use, the optimization of station deployment was researched in this paper. Firstly, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and their error distribution characteristics were studied. Finally, by measuring a simulated fuselage using the typical deployments at an industrial site and comparing the results with a laser tracker, some conclusions are obtained. The comparison results show that, under existing prototype conditions, the I_3 typical deployment, in which the three stations are distributed in a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m. Meanwhile, the C_3 typical deployment, in which the three stations are uniformly distributed over the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Clearly, the C_3 typical deployment provides better control of precision than the I_3 type. This research work provides effective theoretical support for global measurement network optimization in future work.

  17. Quality assurance of dynamic parameters in volumetric modulated arc therapy

    PubMed Central

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-01-01

    Objectives The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy® S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Methods Three tests (for gantry position–dose delivery synchronisation, gantry speed–dose delivery synchronisation and MLC leaf speed and positions) were performed. Results The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the “beginning” and “end” errors. For MLC position verification, the maximum error was −2.46 mm and the mean error was 0.0153 ±0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. Conclusion This experiment demonstrates that the variables and parameters of the Synergy® S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC. PMID:22745206

  18. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.

  19. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.

  20. Entangled quantum key distribution over two free-space optical links.

    PubMed

    Erven, C; Couteau, C; Laflamme, R; Weihs, G

    2008-10-13

    We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover a distance of 435 m and 1,325 m respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom written software to perform the complete QKD protocol including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.

  1. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced proportions of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulations show that paired-comparison-based scaling methods can have large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actually scaled values of color image prints as measured by the method of paired comparison.

  2. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Blerman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
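    A minimal sketch of simulating a first-order Gauss-Markov error process with the time constant and steady-state variance quoted above; it covers only the temporal part of the model (the baseline-distance dependence is omitted) and the sampling interval is an assumption.

    import numpy as np

    def simulate_gauss_markov(n_steps, dt, tau=3847.1, variance=3.73, seed=0):
        # x_{k+1} = phi * x_k + w_k with phi = exp(-dt/tau); the driving-noise
        # variance is chosen so the steady-state variance equals `variance`
        # (square meters here).
        rng = np.random.default_rng(seed)
        phi = np.exp(-dt / tau)
        q = variance * (1.0 - phi ** 2)
        x = np.empty(n_steps)
        x[0] = rng.normal(scale=np.sqrt(variance))
        for k in range(1, n_steps):
            x[k] = phi * x[k - 1] + rng.normal(scale=np.sqrt(q))
        return x

    errors = simulate_gauss_markov(n_steps=3600, dt=1.0)   # one hour at an assumed 1 Hz
    print(errors.var())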

  3. The input ambiguity hypothesis and case blindness: an account of cross-linguistic and intra-linguistic differences in case errors.

    PubMed

    Pelham, Sabra D

    2011-03-01

    English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63·3% in English, compared with 7·6% in German. The percentage of case-ambiguous articles in German was 77·0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.

  4. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
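    As a small illustration of the Poisson approximation discussed above, the sketch below (not the study's analysis code) compares the normalized root mean square error of simulated repeated count measurements with the 1/sqrt(N) Poisson estimate.

    import numpy as np

    rng = np.random.default_rng(1)
    counts = rng.poisson(lam=4000, size=200)      # simulated repeated point measurements

    mean = counts.mean()
    nrmse = np.sqrt(np.mean((counts - mean) ** 2)) / mean   # normalized RMS error
    poisson_estimate = 1.0 / np.sqrt(mean)                  # sqrt(N)/N for a Poisson process

    print(f"measured NRMSE   = {nrmse:.4f}")
    print(f"Poisson estimate = {poisson_estimate:.4f}")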

  5. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivities of the predictions to the parameters were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e. MABE and RMSE) were less than 20%. The approach we propose here can potentially be useful for predicting average hourly global solar radiation on the horizontal surface at different locations, with the use of readily available data (i.e. the latitude and longitude of the location) as inputs.
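    The evaluation statistics named above (r, MABE, RMSE and R2) can be computed as in the following sketch; the observed and predicted values are placeholders, not data from the study.

    import numpy as np

    def evaluate(observed, predicted):
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        r = np.corrcoef(observed, predicted)[0, 1]
        mabe = np.mean(np.abs(predicted - observed))              # mean absolute bias error
        rmse = np.sqrt(np.mean((predicted - observed) ** 2))      # root mean square error
        ss_res = np.sum((observed - predicted) ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot                                # coefficient of determination
        return {"r": r, "MABE": mabe, "RMSE": rmse, "R2": r2}

    obs = np.array([120.0, 340.0, 560.0, 610.0, 480.0, 250.0])    # illustrative hourly values, W/m^2
    pred = np.array([135.0, 330.0, 545.0, 640.0, 460.0, 270.0])
    print(evaluate(obs, pred))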

  6. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface are (high amplitude, short wave length), the smaller is the distance from the interface at which the measurements can be performed.

  7. Improvement of GPS radio occultation retrieval error of E region electron density: COSMIC measurement and IRI model simulation

    NASA Astrophysics Data System (ADS)

    Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang

    2015-03-01

    In this article, we use the International Reference Ionosphere (IRI) model to simulate temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurement and the IRI model simulation both reveal that the magnitudes of the percentage error (PE) and root mean-square-error (RMSE) of the relative RO retrieval errors of the NmE values are dependent on local time (LT) and geomagnetic latitude, with minimum in the early morning and at high latitudes and maximum in the afternoon and at middle latitudes. In addition, the seasonal variation of PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are, respectively, substantially reduced by a factor of about 2.95 and 3.35, and the corresponding root-mean-square errors show averaged decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reduction in the PE and RMSE of the COSMIC-measured NmE occurs at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, with a factor of 25 and 2, respectively. Statistics show that the residual errors that remained in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which are comparable to or larger than the percentage errors of the IRI-predicted NmE fluctuating in a range of -6.5% to 20%.

  8. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    PubMed

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments.

  9. Climate model biases in seasonality of continental water storage revealed by satellite gravimetry

    USGS Publications Warehouse

    Swenson, Sean; Milly, P.C.D.

    2006-01-01

    Satellite gravimetric observations of monthly changes in continental water storage are compared with outputs from five climate models. All models qualitatively reproduce the global pattern of annual storage amplitude, and the seasonal cycle of global average storage is reproduced well, consistent with earlier studies. However, global average agreements mask systematic model biases in low latitudes. Seasonal extrema of low‐latitude, hemispheric storage generally occur too early in the models, and model‐specific errors in amplitude of the low‐latitude annual variations are substantial. These errors are potentially explicable in terms of neglected or suboptimally parameterized water stores in the land models and precipitation biases in the climate models.

  10. A model-data comparison of gross primary productivity: Results from the North American Carbon Program site synthesis

    Treesearch

    Kevin Schaefer; Christopher R. Schwalm; Chris Williams; M. Altaf Arain; Alan Barr; Jing M. Chen; Kenneth J. Davis; Dimitre Dimitrov; Timothy W. Hilton; David Y. Hollinger; Elyn Humphreys; Benjamin Poulter; Brett M. Raczka; Andrew D. Richardson; Alok Sahoo; Peter Thornton; Rodrigo Vargas; Hans Verbeeck; Ryan Anderson; Ian Baker; T. Andrew Black; Paul Bolstad; Jiquan Chen; Peter S. Curtis; Ankur R. Desai; Michael Dietze; Danilo Dragoni; Christopher Gough; Robert F. Grant; Lianhong Gu; Atul Jain; Chris Kucharik; Beverly Law; Shuguang Liu; Erandathie Lokipitiya; Hank A. Margolis; Roser Matamala; J. Harry McCaughey; Russ Monson; J. William Munger; Walter Oechel; Changhui Peng; David T. Price; Dan Ricciuto; William J. Riley; Nigel Roulet; Hanqin Tian; Christina Tonitto; Margaret Torn; Ensheng Weng; Xiaolu Zhou

    2012-01-01

    Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States...

  11. How Achievement Error Patterns of Students with Mild Intellectual Disability Differ from Low IQ and Low Achievement Students without Diagnoses

    ERIC Educational Resources Information Center

    Root, Melissa M.; Marchis, Lavinia; White, Erica; Courville, Troy; Choi, Dowon; Bray, Melissa A.; Pan, Xingyu; Wayte, Jessica

    2017-01-01

    This study investigated the differences in error factor scores on the Kaufman Test of Educational Achievement-Third Edition between individuals with mild intellectual disabilities (Mild IDs), those with low achievement scores but average intelligence, and those with low intelligence but without a Mild ID diagnosis. The two control groups were…

  12. Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children with Histories of Speech Sound Disorders

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2013-01-01

    Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…

  13. Estimating the densities of benzene-derived explosives using atomic volumes.

    PubMed

    Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka

    2018-02-09

    The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula CaHbNcOd is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm3 for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: Application and reliability of average atomic volumes in the crystal density prediction of energetic compounds containing a benzene ring.
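    A minimal sketch of the average-atomic-volume idea: the density estimate is the molecular mass divided by the sum of per-atom volumes. The volume values below are rough placeholders, not the averages fitted in the paper.

    AVOGADRO = 6.02214076e23          # 1/mol

    ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}   # g/mol
    AVG_ATOMIC_VOLUME = {"C": 13.0, "H": 6.0, "N": 12.0, "O": 11.0}     # A^3, assumed values

    def estimate_density(composition):
        # Estimate density (g/cm^3) for a CaHbNcOd composition dict, e.g. {"C": 7, ...}.
        mass = sum(n * ATOMIC_MASS[el] for el, n in composition.items())           # g/mol
        volume_a3 = sum(n * AVG_ATOMIC_VOLUME[el] for el, n in composition.items())
        volume_cm3_per_mol = volume_a3 * 1e-24 * AVOGADRO                          # cm^3/mol
        return mass / volume_cm3_per_mol

    # TNT (C7H5N3O6) as a usage example; the placeholder volumes give only a rough number.
    print(estimate_density({"C": 7, "H": 5, "N": 3, "O": 6}))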

  14. Assessment of averaging spatially correlated noise for 3-D radial imaging.

    PubMed

    Stobbe, Robert W; Beaulieu, Christian

    2011-07-01

    Any measurement of signal intensity obtained from an image will be corrupted by noise. If the measurement is from one voxel, an error bound associated with noise can be assigned if the standard deviation of noise in the image is known. If voxels are averaged together within a region of interest (ROI) and the image noise is uncorrelated, the error bound associated with noise will be reduced in proportion to the square root of the number of voxels in the ROI. However, when 3-D-radial images are created the image noise will be spatially correlated. In this paper, an equation is derived and verified with simulated noise for the computation of noise averaging when image noise is correlated, facilitating the assessment of noise characteristics for different 3-D-radial imaging methodologies. It is already known that if the radial evolution of projections are altered such that constant sampling density is produced in k-space, the signal-to-noise ratio (SNR) inefficiency of standard radial imaging (SR) can effectively be eliminated (assuming a uniform transfer function is desired). However, it is shown in this paper that the low-frequency noise power reduction of SR will produce beneficial (anti-) correlation of noise and enhanced noise averaging characteristics. If an ROI contains only one voxel a radial evolution altered uniform k-space sampling technique such as twisted projection imaging (TPI) will produce an error bound ~35% less with respect to noise than SR, however, for an ROI containing 16 voxels the SR methodology will facilitate an error bound ~20% less than TPI. If a filtering transfer function is desired, it is shown that designing sampling density to create the filter shape has both SNR and noise correlation advantages over sampling k-space uniformly. In this context SR is also beneficial. Two sets of 48 images produced from a saline phantom with sodium MRI at 4.7T are used to experimentally measure noise averaging characteristics of radial imaging and good agreement with theory is obtained.
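    The effect of noise correlation on ROI averaging described above can be illustrated with the variance of a mean of correlated voxels, var(mean) = (1/N^2) * sum_ij Cov_ij. The exponential covariance models below are assumptions for illustration, not the 3-D radial point spread functions discussed in the paper.

    import numpy as np

    def roi_mean_std(cov):
        # Standard deviation of the mean of N voxels with covariance matrix cov.
        n = cov.shape[0]
        return np.sqrt(cov.sum()) / n

    n_voxels, sigma = 16, 1.0
    idx = np.arange(n_voxels)
    dist = np.abs(idx[:, None] - idx[None, :])

    cov_uncorr = sigma**2 * np.eye(n_voxels)   # uncorrelated noise: error falls as 1/sqrt(N)
    cov_corr = sigma**2 * 0.4 ** dist          # positively correlated neighbouring voxels
    cov_anti = sigma**2 * (-0.2) ** dist       # (anti-)correlated neighbouring voxels

    for name, c in [("uncorrelated", cov_uncorr), ("correlated", cov_corr), ("anti-correlated", cov_anti)]:
        print(name, roi_mean_std(c))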

  15. SU-E-T-318: The Effect of Patient Positioning Errors On Target Coverage and Cochlear Dose in Stereotactic Radiosurgery Treatment of Acoustic Neuromas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellamonica, D.; Luo, G.; Ding, G.

    Purpose: Setup errors on the order of millimeters may cause under-dosing of targets and significant changes in dose to critical structures especially when planning with tight margins in stereotactic radiosurgery. This study evaluates the effects of these types of patient positioning uncertainties on planning target volume (PTV) coverage and cochlear dose for stereotactic treatments of acoustic neuromas. Methods: Twelve acoustic neuroma patient treatment plans were retrospectively evaluated in Brainlab iPlan RT Dose 4.1.3. All treatment beams were shaped by HDMLC from a Varian TX machine. Seven patients had planning margins of 2mm, five had 1–1.5mm. Six treatment plans were created for each patient simulating a 1mm setup error in six possible directions: anterior-posterior, lateral, and superior-inferior. The arcs and HDMLC shapes were kept the same for each plan. Change in PTV coverage and mean dose to the cochlea was evaluated for each plan. Results: The average change in PTV coverage for the 72 simulated plans was −1.7% (range: −5 to +1.1%). The largest average change in coverage was observed for shifts in the patient's superior direction (−2.9%). The change in mean cochlear dose was highly dependent upon the direction of the shift. Shifts in the anterior and superior direction resulted in an average increase in dose of 13.5 and 3.8%, respectively, while shifts in the posterior and inferior direction resulted in an average decrease in dose of 17.9 and 10.2%. The average change in dose to the cochlea was 13.9% (range: 1.4 to 48.6%). No difference was observed based on the size of the planning margin. Conclusion: This study indicates that if the positioning uncertainty is kept within 1mm the setup errors may not result in significant under-dosing of the acoustic neuroma target volumes. However, the change in mean cochlear dose is highly dependent upon the direction of the shift.

  16. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  17. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, giving a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
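    A minimal sketch of the GM(1,1) grey model mentioned above, fitted to a short placeholder demand series (the values are illustrative, not the paper's data).

    import numpy as np

    def gm11_forecast(x0, n_ahead=3):
        # Fit GM(1,1) to a 1-D series x0 and forecast n_ahead further points.
        x0 = np.asarray(x0, dtype=float)
        n = len(x0)
        x1 = np.cumsum(x0)                                 # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
        B = np.column_stack([-z1, np.ones(n - 1)])
        Y = x0[1:]
        a, b = np.linalg.lstsq(B, Y, rcond=None)[0]        # least-squares estimate of (a, b)
        k = np.arange(n + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # response of the whitened equation
        x0_hat = np.empty_like(x1_hat)
        x0_hat[0] = x0[0]
        x0_hat[1:] = np.diff(x1_hat)                       # inverse accumulation
        return x0_hat[n:]                                  # the n_ahead forecasts

    demand = [187.5, 195.0, 202.8, 211.4, 219.9, 228.6]    # illustrative annual GWh values
    print(gm11_forecast(demand, n_ahead=2))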

  18. Interactions between moist heating and dynamics in atmospheric predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straus, D.M.; Huntley, M.A.

    1994-02-01

    The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.

  19. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  20. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of a decode-and-forward dual-hop mixed radio frequency / free-space optical system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in the RF link is derived with the help of the hypergeometric function, and that in the FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained on the basis of the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that, with ZBPE and NBPE considered, the FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in an urban area. However, aperture averaging can bring a significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  1. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning

    PubMed Central

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-01-01

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including the sidereal filtering algorithm and the multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error to the observation equation of the adjustment model to improve the precision of BDS static baseline solutions. The results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744

  2. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning.

    PubMed

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-04-07

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including the sidereal filtering algorithm and the multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error to the observation equation of the adjustment model to improve the precision of BDS static baseline solutions. The results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively.

  3. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.

  4. Predicting alpine headwater stream intermittency: a case study in the northern Rocky Mountains

    USGS Publications Warehouse

    Sando, Thomas R.; Blasch, Kyle W.

    2015-01-01

    This investigation used climatic, geological, and environmental data coupled with observational stream intermittency data to predict alpine headwater stream intermittency. Prediction was made using a random forest classification model. Results showed that the most important variables in the prediction model were snowpack persistence, represented by average snow extent from March through July, mean annual mean monthly minimum temperature, and surface geology types. For stream catchments with intermittent headwater streams, snowpack, on average, persisted until early June, whereas for stream catchments with perennial headwater streams, snowpack, on average, persisted until early July. Additionally, on average, stream catchments with intermittent headwater streams were about 0.7 °C warmer than stream catchments with perennial headwater streams. Finally, headwater stream catchments primarily underlain by coarse, permeable sediment are significantly more likely to have intermittent headwater streams than those primarily underlain by impermeable bedrock. Comparison of the predicted streamflow classification with observed stream status indicated a four percent classification error for first-order streams and a 21 percent classification error for all stream orders in the study area.
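    A brief sketch (not the study's code) of a random-forest classification of stream intermittency, with hypothetical predictor names standing in for the snowpack-persistence, temperature and surface-geology variables and synthetic labels that loosely follow the reported pattern.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    n = 300
    catchments = pd.DataFrame({
        "snow_extent_mar_jul": rng.uniform(0.1, 1.0, n),   # fraction of months with snow cover
        "mean_min_temp_c": rng.normal(-2.0, 1.5, n),
        "permeable_geology": rng.integers(0, 2, n),        # 1 = coarse permeable sediment
    })
    # Synthetic labels: warm, permeable, early-melt catchments are more likely intermittent.
    score = (-catchments.snow_extent_mar_jul + 0.2 * catchments.mean_min_temp_c
             + 0.5 * catchments.permeable_geology)
    labels = (score + rng.normal(0, 0.3, n) > score.median()).astype(int)   # 1 = intermittent

    model = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    model.fit(catchments, labels)
    print("out-of-bag accuracy:", model.oob_score_)
    print(dict(zip(catchments.columns, model.feature_importances_)))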

  5. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    NASA Astrophysics Data System (ADS)

    Dahlqvist, Per

    1999-10-01

    We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  6. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
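    A minimal sketch of a weighted-average lagged ensemble: the weights minimize the mean square error of the weighted forecast subject to summing to one, giving w = C^{-1}1 / (1'C^{-1}1). The error covariance below is an assumed example, not one of the paper's analytic cases.

    import numpy as np

    def optimal_lagged_weights(error_cov):
        # Minimize w' C w subject to sum(w) = 1.
        ones = np.ones(error_cov.shape[0])
        c_inv_1 = np.linalg.solve(error_cov, ones)
        return c_inv_1 / ones.dot(c_inv_1)

    # Hypothetical covariance of bias-corrected forecast errors at lead times 1..4:
    # error variance grows with lead time and errors are correlated across leads.
    std = np.array([1.0, 1.3, 1.8, 2.6])
    corr = 0.8 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
    cov = np.outer(std, std) * corr

    weights = optimal_lagged_weights(cov)
    print(weights, weights.sum())   # weights may turn negative for rapidly growing, correlated errors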

  7. Emergency Department Visit Forecasting and Dynamic Nursing Staff Allocation Using Machine Learning Techniques With Readily Available Open-Source Software.

    PubMed

    Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel

    2015-08-01

    Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.

  8. Effectiveness of Specimen Collection Technology in the Reduction of Collection Turnaround Time and Mislabeled Specimens in Emergency, Medical-Surgical, Critical Care, and Maternal Child Health Departments.

    PubMed

    Saathoff, April M; MacDonald, Ryan; Krenzischek, Erundina

    2018-03-01

    The objective of this study was to evaluate the impact of specimen collection technology implementation featuring computerized provider order entry, positive patient identification, bedside specimen label printing, and barcode scanning on the reduction of mislabeled specimens and collection turnaround times in the emergency, medical-surgical, critical care, and maternal child health departments at a community teaching hospital. A quantitative analysis of a nonrandomized, pre-post intervention study design evaluated the statistical significance of reduction of mislabeled specimen percentages and collection turnaround times affected by the implementation of specimen collection technology. Mislabeled specimen percentages in all areas decreased from an average of 0.020% preimplementation to an average of 0.003% postimplementation, with a P < .001. Collection turnaround times longer than 60 minutes decreased after the implementation of specimen collection technology by an average of 27%, with a P < .001. Specimen collection and identification errors are a significant problem in healthcare, contributing to incorrect diagnoses, delayed care, lack of essential treatments, and patient injury or death. Collection errors can also contribute to an increased length of stay, increased healthcare costs, and decreased patient satisfaction. Specimen collection technology has structures in place to prevent collection errors and improve the overall efficiency of the specimen collection process.

  9. Cost effectiveness of the stream-gaging program in Ohio

    USGS Publications Warehouse

    Shindel, H.L.; Bartlett, W.P.

    1986-01-01

    This report documents the results of a cost-effectiveness analysis of the stream-gaging program in Ohio. Data uses and funding sources were identified for 107 continuous stream gages currently being operated by the U.S. Geological Survey in Ohio with a budget of $682,000; this budget includes field work for other projects and excludes stations jointly operated with the Miami Conservancy District. No stream gages were identified as having insufficient reason to continue their operation, nor were any stations identified as having uses limited to short-term studies. All 107 stations should be maintained in the program for the foreseeable future. The average standard error of estimation of streamflow records is 29.2 percent at the present level of funding. A minimum budget of $679,000 is required to operate the 107-gage program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 31.1 percent. The maximum budget analyzed was $1,282,000, which resulted in an average standard error of 11.1 percent. A need for additional gages has been identified by the other agencies that cooperate in the program. It is suggested that these gages be installed as funds become available.

  10. Average BER and outage probability of the ground-to-train OWC link in turbulence with rain

    NASA Astrophysics Data System (ADS)

    Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da

    2017-09-01

    The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link for ground-to-train transmission along a curved track in turbulence with rain are evaluated. Considering the re-modulation effect of rain-induced fluctuations on the optical signal already modulated by turbulence, we set up models of the average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and by pointing errors, which are induced by beam wander and train vibration. The effect of the rain rate on link performance is more severe than that of atmospheric turbulence, but turbulence-induced fluctuations affect laser beam propagation more strongly than the skewness of the rain distribution. Moreover, turbulence-induced beam wander has a more significant impact on the system in heavier rain. The size of the transmitting and receiving apertures can be chosen, and the shockproof performance of the track improved, to optimize the communication performance of the system.
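
    As a generic illustration of how an average BER is obtained by averaging a conditional BER over irradiance fading, the sketch below assumes an OOK link with conditional BER Q(gamma0*I) and a single inverse Gaussian fading model normalized to unit mean; the paper's double-IG turbulence-with-rain model and pointing-error terms are not reproduced.

```python
# Generic Monte Carlo sketch of an average BER under irradiance fading.
# Assumptions (not from the paper): OOK with conditional BER Q(gamma0 * I),
# and a single inverse Gaussian irradiance model normalized to unit mean.
import numpy as np
from scipy.stats import invgauss
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def average_ber(gamma0, mu, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    I = invgauss.rvs(mu, scale=1.0 / mu, size=n, random_state=rng)  # unit-mean fading
    return np.mean(Q(gamma0 * I))

print(average_ber(gamma0=3.0, mu=0.3))   # stronger fluctuations (larger mu) raise the BER
```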

  11. Quantification of evaporation induced error in atom probe tomography using molecular dynamics simulation.

    PubMed

    Chen, Shu Jian; Yao, Xupei; Zheng, Changxi; Duan, Wen Hui

    2017-11-01

    Non-equilibrium molecular dynamics was used to simulate the dynamics of atoms at the atom probe surface and five objective functions were used to quantify errors. The results suggested that before ionization, thermal vibration and collision caused the atoms to displace up to 1 Å and 25 Å, respectively. The average atom displacements were found to vary between 0.2 and 0.5 Å. About 9 to 17% of the atoms were affected by collision. Due to the effects of collision and ion-ion repulsion, the back-calculated positions were on average 0.3-0.5 Å different from the pre-ionized positions of the atoms when the number of ions generated per pulse was minimal. This difference could increase up to 8-10 Å when 1.5 ions/nm² were evaporated per pulse. On the basis of the results, surface ion density was considered an important factor that needed to be controlled to minimize error in the evaporation process. Copyright © 2017. Published by Elsevier B.V.

  12. High-speed receiver based on waveguide germanium photodetector wire-bonded to 90nm SOI CMOS amplifier.

    PubMed

    Pan, Huapu; Assefa, Solomon; Green, William M J; Kuchta, Daniel M; Schow, Clint L; Rylyakov, Alexander V; Lee, Benjamin G; Baks, Christian W; Shank, Steven M; Vlasov, Yurii A

    2012-07-30

    The performance of a receiver based on a CMOS amplifier circuit designed with 90nm ground rules wire-bonded to a waveguide germanium photodetector is characterized at data rates up to 40Gbps. Both chips were fabricated through the IBM Silicon CMOS Integrated Nanophotonics process on specialty photonics-enabled SOI wafers. At the data rate of 28Gbps, which is relevant to the new generation of optical interconnects, a sensitivity of -7.3dBm average optical power is demonstrated with 3.4pJ/bit power-efficiency and 0.6UI horizontal eye opening at a bit-error-rate of 10⁻¹². The receiver operates error-free (bit-error-rate < 10⁻¹²) up to 40Gbps with optimized power supply settings, demonstrating an energy efficiency of 1.4pJ/bit and 4pJ/bit at data rates of 32Gbps and 40Gbps, respectively, with an average optical power of -0.8dBm.

  13. Energy-efficient quantum computing

    NASA Astrophysics Data System (ADS)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type has been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme that reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  14. Lower-tropospheric CO 2 from near-infrared ACOS-GOSAT observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.

    We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.

  15. Lower-tropospheric CO 2 from near-infrared ACOS-GOSAT observations

    DOE PAGES

    Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.; ...

    2017-04-27

    We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.

  16. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  17. Development of multiple-eye PIV using mirror array

    NASA Astrophysics Data System (ADS)

    Maekawa, Akiyoshi; Sakakibara, Jun

    2018-06-01

    In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the nC2 intersection points of the back-projected lines of sight of a particle found in every combination of two of the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, the bias error and random error fell in ranges of ±0.02 pixels and 0.02–0.05 pixels, respectively; additionally, the random error decreased in proportion to . In the latter measurement, in which the measured values were compared to direct numerical simulation, the bias error was reduced and the random error also decreased in proportion to .
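
    The pairwise-ray triangulation described above can be sketched as follows: for every pair of back-projected lines of sight, take the midpoint of their closest-approach segment and average over all nC2 pairs. Ray origins and directions are hypothetical inputs; the mirror-array calibration itself is not modeled.

```python
# Hedged sketch of pairwise-ray triangulation: average the closest-approach
# midpoints over all C(n, 2) pairs of back-projected lines of sight.
import numpy as np
from itertools import combinations

def closest_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + s*d1 and p2 + t*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                      # ~0 for (nearly) parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

def triangulate(origins, directions):
    """Average of the pairwise closest-approach midpoints over all ray pairs."""
    pts = [closest_midpoint(origins[i], directions[i], origins[j], directions[j])
           for i, j in combinations(range(len(origins)), 2)]
    return np.mean(pts, axis=0)
```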

  18. Checkpoint-based forward recovery using lookahead execution and rollback validation in parallel and distributed systems. Ph.D. Thesis, 1992

    NASA Technical Reports Server (NTRS)

    Long, Junsheng

    1994-01-01

    This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs lower static redundancy to detect errors in the common error-free situation than the standard N-Modular Redundancy (NMR) scheme uses to mask errors. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively, and a rollback process is used to produce a diagnosis for correct look-ahead processes without rollback of the whole system. Both analytical and experimental evaluations have shown that this strategy can provide a nearly error-free execution time even under faults, with a lower average redundancy than NMR.

  19. Characteristics of Single-Event Upsets in a Fabric Switch (ADS151)

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.

    2003-01-01

    Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts with the average number of bits in a burst being dependent on both the ion LET and on the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second rank latch containing the data specifying the state of each switch in the 33x17 matrix.

  20. Performance of the Keck Observatory adaptive-optics system.

    PubMed

    van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A

    2004-10-10

    The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: The AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 μm when a bright guide star is used and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.

  1. Short-range optical air data measurements for aircraft control using rotational Raman backscatter.

    PubMed

    Fraczek, Michael; Behrendt, Andreas; Schmitt, Nikolaus

    2013-07-15

    A first laboratory prototype of a novel concept for a short-range optical air data system for aircraft control and safety was built. The measurement methodology was introduced in [Appl. Opt. 51, 148 (2012)] and is based on lidar techniques detecting elastic and Raman backscatter from air. A wide range of flight-critical parameters, such as air temperature, molecular number density, and pressure, can be measured, and data on atmospheric particles and humidity can be collected. In this paper, the experimental measurement performance achieved with the first laboratory prototype using 532 nm laser radiation with a pulse energy of 118 mJ is presented. Systematic measurement errors and statistical measurement uncertainties are quantified separately. The typical systematic temperature, density, and pressure measurement errors obtained from the mean of 1000 averaged signal pulses are small, amounting to <0.22 K, <0.36%, and <0.31%, respectively, for measurements at air pressures varying from 200 hPa to 950 hPa at a constant air temperature of 298.95 K. The systematic measurement errors at air temperatures varying from 238 K to 308 K at a constant air pressure of 946 hPa are even smaller: <0.05 K, <0.07%, and <0.06%, respectively. A focus is put on the system performance at different virtual flight altitudes as a function of the laser pulse energy. The virtual flight altitudes are precisely generated with a custom-made atmospheric simulation chamber system. In this context, the minimum laser pulse energies and pulse numbers required by the measurement system to meet the temperature and pressure error demands specified in aviation standards are determined experimentally. The aviation error margins limit the allowable temperature errors to 1.5 K at all measurement altitudes and the pressure errors to 0.1% at 0 m and 0.5% at 13,000 m. For 100-pulse-averaged temperature measurements at 532 nm, the laser pulse energy has to be larger than 11 mJ (35 mJ) for 1-σ (3-σ) uncertainties at all measurement altitudes; for 100-pulse-averaged pressure measurements, it has to be larger than 95 mJ (355 mJ), respectively. Based on these experimental results, the laser pulse energy requirements are extrapolated to the ultraviolet wavelength region as well, resulting in significantly lower pulse energy demands of 1.5-3 mJ (4-10 mJ) and 12-27 mJ (45-110 mJ) for 1-σ (3-σ) 100-pulse-averaged temperature and pressure measurements, respectively.

  2. Investigation of interfractional shape variations based on statistical point distribution model for prostate cancer radiation therapy.

    PubMed

    Shibayama, Yusuke; Arimura, Hidetaka; Hirose, Taka-Aki; Nakamoto, Takahiro; Sasaki, Tomonari; Ohga, Saiji; Matsushita, Norimasa; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Honda, Hiroshi

    2017-05-01

    The setup errors and organ motion errors pertaining to the clinical target volume (CTV) have been considered two major causes of uncertainties in the determination of the CTV-to-planning target volume (PTV) margins for prostate cancer radiation treatment planning. We based our study on the assumption that interfractional target shape variations are not negligible as another source of uncertainty for the determination of precise CTV-to-PTV margins. Thus, we investigated the interfractional shape variations of CTVs based on a point distribution model (PDM) for prostate cancer radiation therapy. To quantitate the shape variations of CTVs, the PDM was applied to the contours of 4 types of CTV regions (low-risk, intermediate-risk, high-risk CTVs, and prostate plus entire seminal vesicles), which were delineated by considering prostate cancer risk groups on planning computed tomography (CT) and cone beam CT (CBCT) images of 73 fractions of 10 patients. The standard deviations (SDs) of the interfractional random errors for shape variations were obtained from covariance matrices based on the PDMs, which were generated from vertices of triangulated CTV surfaces. The correspondences between CTV surface vertices were determined based on a thin-plate spline robust point matching algorithm. The systematic error for shape variations was defined as the average deviation between surfaces of an average CTV and planning CTVs, and the random error as the average deviation of CTV surface vertices for fractions from an average CTV surface. The means of the SDs of the systematic errors for the four types of CTVs ranged from 1.0 to 2.0 mm along the anterior direction, 1.2 to 2.6 mm along the posterior direction, 1.0 to 2.5 mm along the superior direction, 0.9 to 1.9 mm along the inferior direction, 0.9 to 2.6 mm along the right direction, and 1.0 to 3.0 mm along the left direction. Concerning the random errors, the means of the SDs ranged from 0.9 to 1.2 mm along the anterior direction, 1.0 to 1.4 mm along the posterior direction, 0.9 to 1.3 mm along the superior direction, 0.8 to 1.0 mm along the inferior direction, 0.8 to 0.9 mm along the right direction, and 0.8 to 1.0 mm along the left direction. Since the shape variations were not negligible for intermediate and high-risk CTVs, they should be taken into account for the determination of the CTV-to-PTV margins in radiation treatment planning of prostate cancer. © 2017 American Association of Physicists in Medicine.

  3. Modeling Coronal Response in Decaying Active Regions with Magnetic Flux Transport and Steady Heating

    NASA Astrophysics Data System (ADS)

    Ugarte-Urra, Ignacio; Warren, Harry P.; Upton, Lisa A.; Young, Peter R.

    2017-09-01

    We present new measurements of the dependence of the extreme ultraviolet (EUV) radiance on the total magnetic flux in active regions as obtained from the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Using observations of nine active regions tracked along different stages of evolution, we extend the known radiance–magnetic flux power-law relationship (I ∝ Φ^α) to the AIA 335 Å passband, and the Fe XVIII 93.93 Å spectral line in the 94 Å passband. We find that the total unsigned magnetic flux divided by the polarity separation (Φ/D) is a better indicator of radiance for the Fe XVIII line, with a slope of α = 3.22 ± 0.03. We then use these results to test our current understanding of magnetic flux evolution and coronal heating. We use magnetograms from the simulated decay of these active regions produced by the Advective Flux Transport model as boundary conditions for potential extrapolations of the magnetic field in the corona. We then model the hydrodynamics of each individual field line with the Enthalpy-based Thermal Evolution of Loops model with steady heating scaled as the ratio of the average field strength and the length (B̄/L) and render the Fe XVIII and 335 Å emission. We find that steady heating is able to partially reproduce the magnitudes and slopes of the EUV radiance–magnetic flux relationships and discuss how impulsive heating can help reconcile the discrepancies. This study demonstrates that combined models of magnetic flux transport, magnetic topology, and heating can yield realistic estimates for the decay of active region radiances with time.
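
    The power-law exponent in a relationship such as I ∝ Φ^α is commonly obtained from a straight-line fit in log-log space; the sketch below shows that step with synthetic flux and radiance values standing in for the AIA/HMI measurements.

```python
# Sketch of a power-law fit: the slope of a linear fit in log-log space gives
# the exponent alpha and its standard error.  The data are placeholders.
import numpy as np

def powerlaw_exponent(flux, radiance):
    x, y = np.log10(flux), np.log10(radiance)
    (alpha, intercept), cov = np.polyfit(x, y, deg=1, cov=True)
    return alpha, np.sqrt(cov[0, 0])

rng = np.random.default_rng(0)
flux = 10 ** rng.uniform(21, 23, 200)                          # e.g. total unsigned flux, Mx
radiance = 1e-60 * flux ** 3.2 * 10 ** rng.normal(0, 0.1, 200)  # synthetic I ~ Phi^3.2
alpha, err = powerlaw_exponent(flux, radiance)
print(f"alpha = {alpha:.2f} +/- {err:.2f}")
```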

  4. Temporal evolution of solar wind ion composition and their source coronal holes during the declining phase of cycle 23. I. Low-latitude extension of polar coronal holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Yuan-Kuen; Wang, Yi-Ming; Muglach, Karin

    2014-06-01

    We analyzed 27 solar wind (SW) intervals during the declining phase of cycle 23, whose source coronal holes (CHs) can be unambiguously identified and are associated with one of the polar CHs. We found that the SW ions have a temporal trend of decreasing ionization state, and such a trend is different between the slow and fast SW. The photospheric magnetic field, both inside and at the outside boundary of the CH, also exhibits a trend of decrease with time. However, EUV line emissions from different layers of the atmosphere exhibit different temporal trends. The coronal emission inside the CH generally increases toward the CH boundary as the underlying field increases in strength and becomes less unipolar. In contrast, this relationship is not seen in the coronal emission averaged over the entire CH. For C and O SW ions that freeze-in at lower altitude, stronger correlation between their ionization states and field strength (both signed and unsigned) appears in the slow SW, while for Fe ions that freeze-in at higher altitude, stronger correlation appears in the fast SW. Such correlations are seen both inside the CH and at its boundary region. On the other hand, the coronal electron temperature correlates well with the SW ion composition only in the boundary region. Our analyses, although not able to determine the likely footpoint locations of the SW of different speeds, raise many outstanding questions for how the SW is heated and accelerated in response to the long-term evolution of the solar magnetic field.

  5. Is the straddle effect in contrast perception limited to second-order spatial vision?

    PubMed Central

    Graham, Norma V.; Wolfson, S. Sabina

    2018-01-01

    Previous work on the straddle effect in contrast perception (Foley, 2011; Graham & Wolfson, 2007; Wolfson & Graham, 2007, 2009) has used visual patterns and observer tasks of the type known as spatially second-order. After adaptation of about 1 s to a grid of Gabor patches all at one contrast, a second-order test pattern composed of two different test contrasts can be easy or difficult to perceive correctly. When the two test contrasts are both a bit less (or both a bit greater) than the adapt contrast, observers perform very well. However, when the two test contrasts straddle the adapt contrast (i.e., one of the test contrasts is greater than the adapt contrast and the other is less), performance drops dramatically. To explain this drop in performance—the straddle effect—we have suggested a contrast-comparison process. We began to wonder: Are second-order patterns necessary for the straddle effect? Here we show that the answer is “no”. We demonstrate the straddle effect using spatially first-order visual patterns and several different observer tasks. We also see the effect of contrast normalization using first-order visual patterns here, analogous to our prior findings with second-order visual patterns. We did find one difference between first- and second-order tasks: Performance in the first-order tasks was slightly lower. This slightly lower performance may be due to slightly greater memory load. For many visual scenes, the important quantity in human contrast processing may not be monotonic with physical contrast but may be something more like the unsigned difference between current contrast and recent average contrast. PMID:29904790

  6. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Viswanathan, A; Cormack, R

    2015-06-15

    Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and a 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of the N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥ 5, with standard deviation <2 mm for N ≥ 6, and worst-case scenario error <2 mm for N ≥ 11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45×10¹⁰ (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT and only requiring user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
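
    A hedged sketch of the brute-force idea: try every assignment of the N identified tips to the EMT-measured tips, solve a rigid (Kabsch/SVD) registration for each assignment, and keep the one with the smallest RMS error. The arrays and function names are illustrative; the signature-based pruning is not shown.

```python
# Illustrative brute-force registration over tip permutations (not the authors' code).
import numpy as np
from itertools import permutations

def kabsch(P, Q):
    """Rotation R and translation t that best map point set P onto Q (least squares)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def brute_force_register(emt_tips, ct_tips):
    """Try every assignment of CT tips to EMT tips; return the lowest-RMSE fit."""
    best = (np.inf, None)
    for perm in permutations(range(len(ct_tips))):
        target = ct_tips[list(perm)]
        R, t = kabsch(emt_tips, target)
        rmse = np.sqrt(np.mean(np.sum((emt_tips @ R.T + t - target) ** 2, axis=1)))
        best = min(best, (rmse, (R, t, perm)), key=lambda b: b[0])
    return best
```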

  7. A Novel Application of Machine Learning Methods to Model Microcontroller Upset Due to Intentional Electromagnetic Interference

    NASA Astrophysics Data System (ADS)

    Bilalic, Rusmir

    A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection time with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that, on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.
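
    For illustration only (not the thesis code), the sketch below trains an SVM classifier on the four experiment variables to predict upset versus no upset and reports the test error; the data file and column layout are hypothetical.

```python
# Illustrative SVM upset classifier; file name and column layout are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

data = np.loadtxt("iemi_upset.csv", delimiter=",")    # hypothetical: 4 features + label
X, y = data[:, :4], data[:, 4].astype(int)            # injection time, power, width, frequency
split = len(X) // 2                                   # 50/50 train/test, as in the abstract
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[:split], y[:split])
test_error = 1.0 - clf.score(X[split:], y[split:])
print(f"test error: {100 * test_error:.2f}%")
```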

  8. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
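
    The equivalent-conductivity step lends itself to a compact illustration: combine the coarse- and fine-member conductivities weighted by the coarse fraction using arithmetic, geometric, or harmonic averaging. The parameter values below are placeholders, not the calibrated ones.

```python
# Sketch of texture-based equivalent hydraulic conductivities: coarse and fine
# end members combined by the coarse fraction f under three averaging methods.
import numpy as np

def equivalent_k(f, k_coarse, k_fine, method="arithmetic"):
    if method == "arithmetic":
        return f * k_coarse + (1 - f) * k_fine
    if method == "geometric":
        return k_coarse ** f * k_fine ** (1 - f)
    if method == "harmonic":
        return 1.0 / (f / k_coarse + (1 - f) / k_fine)
    raise ValueError(method)

f = 0.35                                   # fraction of coarse sediment in a cell (placeholder)
for m in ("arithmetic", "geometric", "harmonic"):
    print(m, equivalent_k(f, k_coarse=10.0, k_fine=0.01, method=m))  # placeholder units, m/day
```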

  9. Quantification of errors induced by temporal resolution on Lagrangian particles in an eddy-resolving model

    NASA Astrophysics Data System (ADS)

    Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander

    2014-04-01

    Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales and connectivity and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally not be more than a few days to capture the high frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in the Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected from 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk measured diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail. The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.

  10. Successful application of the DBLOC method to the hydroxylation of camphor by cytochrome p450

    PubMed Central

    Jerome, Steven V.; Hughes, Thomas F.

    2015-01-01

    The activation barrier for the hydroxylation of camphor by cytochrome P450 was computed using a mixed quantum mechanics/molecular mechanics (QM/MM) model of the full protein-ligand system and a fully QM calculation using a cluster model of the active site at the B3LYP/LACVP*/LACV3P** level of theory, which consisted of B3LYP/LACV3P** single point energies computed at B3LYP/LACVP* optimized geometries. From the QM/MM calculation, a barrier height of 17.5 kcal/mol was obtained, while the experimental value was known to be less than or equal to 10 kcal/mol. This process was repeated using the D3 correction for hybrid DFT in order to investigate whether the inadequate treatment of dispersion interactions was responsible for the overestimation of the barrier. While the D3 correction reduced the computed barrier to 13.3 kcal/mol, it was still in disagreement with experiment. After application of a series of transition-metal-optimized localized orbital corrections (DBLOC) and without any refitting of parameters, the barrier was further reduced to 10.0 kcal/mol, which was consistent with the experimental results. The DBLOC method was also applied to C-H bond activation in methane monooxygenase (MMO) as a second, independent test. The barrier in MMO was known, by experiment, to be 15.4 kcal/mol [1]. After application of the DBLOC corrections to the MMO barrier computed by B3LYP in a previous study, and accounting for dispersion with Grimme's D3 method, the unsigned deviation from experiment was improved from 3.2 to 2.3 kcal/mol. These results suggested that the combination of dispersion plus localized orbital corrections could yield significant quantitative improvements in modeling the catalytic chemistry of transition-metal-containing enzymes, within the limitations of the statistical errors of the model, which appear to be on the order of approximately 2 kcal/mol. PMID:26441133

  11. Reaction of SO2 with OH in the atmosphere.

    PubMed

    Long, Bo; Bao, Junwei Lucas; Truhlar, Donald G

    2017-03-15

    The OH + SO2 reaction plays a critical role in understanding the oxidation of SO2 in the atmosphere, and its rate constant is critical for clarifying the fate of SO2 in the atmosphere. The rate constant of the OH + SO2 reaction is calculated here by using beyond-CCSDT correlation energy calculations for a benchmark, validated density functional methods for direct dynamics, canonical variational transition state theory with anharmonicity and multidimensional tunneling for the high-pressure rate constant, and system-specific quantum RRK theory for pressure effects; the combination of these methods can compete in accuracy with experiments. There has been a long-term debate in the literature about whether the OH + SO2 reaction is barrierless, but our calculations indicate a positive barrier with a transition structure that has an enthalpy of activation of 0.27 kcal mol⁻¹ at 0 K. Our results show that the high-pressure limiting rate constant of the OH + SO2 reaction has a positive temperature dependence, but the rate constant at low pressures has a negative temperature dependence. The computed high-pressure limiting rate constant at 298 K is 1.25 × 10⁻¹² cm³ molecule⁻¹ s⁻¹, which agrees excellently with the value (1.3 × 10⁻¹² cm³ molecule⁻¹ s⁻¹) recommended in the most recent comprehensive evaluation for atmospheric chemistry. We show that the atmospheric lifetime of SO2 with respect to oxidation by OH depends strongly on altitude (in the range 0-50 km) due to the falloff effect. We introduce a new interpolation procedure for fitting the combined temperature and pressure dependence of the rate constant, and it fits the calculated rate constants over the whole range with a mean unsigned error of only 7%. The present results provide reliable kinetics data for this specific reaction, and they also demonstrate convenient theoretical methods that can be reliable for predicting rate constants of other gas-phase reactions.

  12. Evaluation of B3LYP, X3LYP, and M06-Class Density Functionals for Predicting the Binding Energies of Neutral, Protonated, and Deprotonated Water Clusters.

    PubMed

    Bryantsev, Vyacheslav S; Diallo, Mamadou S; van Duin, Adri C T; Goddard, William A

    2009-04-14

    In this paper we assess the accuracy of the B3LYP, X3LYP, and newly developed M06-L, M06-2X, and M06 functionals to predict the binding energies of neutral and charged water clusters including (H2O)n (n = 2-8, 20), H3O+(H2O)n (n = 1-6), and OH-(H2O)n (n = 1-6). We also compare the predicted energies of two ion hydration and neutralization reactions on the basis of the calculated binding energies. In all cases, we use as benchmarks calculated binding energies of water clusters extrapolated to the complete basis set limit of the second-order Møller-Plesset perturbation theory with the effects of higher order correlation estimated at the coupled-cluster theory with single, double, and perturbative triple excitations in the aug-cc-pVDZ basis set. We rank the accuracy of the functionals on the basis of the mean unsigned error (MUE) between calculated benchmark and density functional theory energies. The corresponding MUE (kcal/mol) for each functional is listed in parentheses. We find that M06-L (0.73) and M06 (0.84) give the most accurate binding energies using very extended basis sets such as aug-cc-pV5Z. For more affordable basis sets, the best methods for predicting the binding energies of water clusters are M06-L/aug-cc-pVTZ (1.24), B3LYP/6-311++G(2d,2p) (1.29), and M06/aug-cc-pVTZ (1.33). M06-L/aug-cc-pVTZ also gives more accurate energies for the neutralization reactions (1.38), whereas B3LYP/6-311++G(2d,2p) gives more accurate energies for the ion hydration reactions (1.69).
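
    The ranking metric itself is straightforward; as a worked example, the mean unsigned error is simply the average absolute difference between benchmark and DFT energies, as in the sketch below (with invented numbers).

```python
# Mean unsigned error (MUE) between benchmark and predicted energies.
# The numbers are invented placeholders, not values from the paper.
import numpy as np

def mean_unsigned_error(benchmark, predicted):
    return np.mean(np.abs(np.asarray(predicted) - np.asarray(benchmark)))

benchmark = [-5.0, -10.2, -15.9, -21.0]          # kcal/mol, placeholder benchmark values
dft       = [-4.3, -11.0, -15.1, -22.4]          # kcal/mol, placeholder functional values
print(f"MUE = {mean_unsigned_error(benchmark, dft):.2f} kcal/mol")
```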

  13. New Parameters for Higher Accuracy in the Computation of Binding Free Energy Differences upon Alanine Scanning Mutagenesis on Protein-Protein Interfaces.

    PubMed

    Simões, Inês C M; Costa, Inês P D; Coimbra, João T S; Ramos, Maria J; Fernandes, Pedro A

    2017-01-23

    Knowing how proteins make stable complexes enables the development of inhibitors to preclude protein-protein (P:P) binding. The identification of the specific interfacial residues that contribute most to protein binding, denominated hot spots, is thus critical. Here, we refine an in silico alanine scanning mutagenesis protocol, based on a residue-dependent dielectric constant version of the Molecular Mechanics/Poisson-Boltzmann Surface Area method. We have used a large data set of structurally diverse P:P complexes to redefine the residue-dependent dielectric constants used in the determination of binding free energies. The accuracy of the method was validated through comparison with experimental data, considering the per-residue P:P binding free energy (ΔΔGbinding) differences upon alanine mutation. Different protocols were tested, i.e., a geometry optimization protocol and three molecular dynamics (MD) protocols: (1) one using explicit water molecules, (2) another with an implicit solvation model, and (3) a third where we have carried out an accelerated MD with explicit water molecules. Using a set of protein dielectric constants (within the range from 1 to 20), we showed that dielectric constants of 7 for nonpolar and polar residues and 11 for charged residues (and histidine) provide optimal ΔΔGbinding predictions. An overall mean unsigned error (MUE) of 1.4 kcal mol⁻¹ relative to experiment was achieved in 210 mutations with geometry optimization alone, which was further reduced with MD simulations (MUE of 1.1 kcal mol⁻¹ for the MD employing explicit solvent). This recalibrated method allows for a better computational identification of hot spots, avoiding expensive and time-consuming experiments or thermodynamic integration / free energy perturbation / uBAR calculations, and will hopefully help new drug discovery campaigns in their quest to identify spots of interest for binding small drug-like molecules at P:P interfaces.

  14. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
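
    The error measure implied above is the angle between a derived (or modeled) field direction and a reference direction, averaged over many samples; the sketch below computes it for placeholder unit vectors.

```python
# Sketch of an average angular error between two sets of field directions.
# The direction arrays are synthetic placeholders.
import numpy as np

def angular_errors_deg(b_test, b_ref):
    """Angles (deg) between corresponding rows of two arrays of 3-vectors."""
    u = b_test / np.linalg.norm(b_test, axis=1, keepdims=True)
    v = b_ref / np.linalg.norm(b_ref, axis=1, keepdims=True)
    cosang = np.clip(np.sum(u * v, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

rng = np.random.default_rng(2)
b_ref = rng.standard_normal((1000, 3))
b_test = b_ref + 0.03 * rng.standard_normal((1000, 3))   # small perturbation, a few degrees
print("average error:", angular_errors_deg(b_test, b_ref).mean(), "deg")
```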

  15. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Overall, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  16. How many drinks did you have on September 11, 2001?

    PubMed

    Perrine, M W Bud; Schroder, Kerstin E E

    2005-07-01

    This study tested the predictability of error in retrospective self-reports of alcohol consumption on September 11, 2001, among 80 Vermont light, medium and heavy drinkers. Subjects were 52 men and 28 women participating in daily self-reports of alcohol consumption for a total of 2 years, collected via interactive voice response technology (IVR). In addition, retrospective self-reports of alcohol consumption on September 11, 2001, were collected by telephone interview 4-5 days following the terrorist attacks. Retrospective error was calculated as the difference between the IVR self-report of drinking behavior on September 11 and the retrospective self-report collected by telephone interview. Retrospective error was analyzed as a function of gender and baseline drinking behavior during the 365 days preceding September 11, 2001 (termed "the baseline"). The intraclass correlation (ICC) between daily IVR and retrospective self-reports of alcohol consumption on September 11 was .80. Women provided, on average, more accurate self-reports (ICC = .96) than men (ICC = .72) but displayed more underreporting bias in retrospective responses. Amount and individual variability of alcohol consumption during the 1-year baseline explained, on average, 11% of the variance in overreporting (r = .33), 9% of the variance in underreporting (r = .30) and 25% of the variance in the overall magnitude of error (r = .50), with correlations up to .62 (r2 = .38). The size and direction of error were clearly predictable from the amount and variation in drinking behavior during the 1-year baseline period. The results demonstrate the utility and detail of information that can be derived from daily IVR self-reports in the analysis of retrospective error.

  17. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Overall, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  18. The Accuracy of Two-Way Satellite Time Transfer Calibrations

    DTIC Science & Technology

    2005-01-01

    Results from successive calibrations of Two-Way Satellite Time and Frequency Transfer (TWSTFT) operational equipment at ... USNO and five remote stations using portable TWSTFT equipment are analyzed for internal and external errors, finding an average random error of ±0.35 ... The most accurate means of operational long-distance time transfer are Two-Way Satellite Time and Frequency Transfer (TWSTFT) and carrier-phase GPS.

  19. On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels

    DTIC Science & Technology

    2013-12-01

    "Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks", which is to analyze and develop ... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system parameters, the ... thesis, we studied the performance of Discrete Memoryless Channels (DMC), arising in the context of cooperative underwater wireless sensor networks.

  20. Hartman Testing of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.
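
    As a hedged sketch of the centroid analysis (not the OSAC implementation), the measured centroids can be fit as a low-order Fourier series in slit azimuth; the constant and low-harmonic coefficients then relate to focus, alignment, and low-order circumferential errors. The coefficient interpretation and data below are illustrative only.

```python
# Illustrative least-squares Fourier fit of image centroids versus slit azimuth.
# The synthetic data and the mapping of coefficients to error terms are assumptions.
import numpy as np

def fourier_fit(theta, centroid, n_harmonics=2):
    """Fit centroid(theta) = a0 + sum_k [a_k cos(k theta) + b_k sin(k theta)]."""
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, centroid, rcond=None)
    return coeffs                      # [a0, a1, b1, a2, b2, ...]

theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)              # slit azimuth positions
centroid_x = 0.2 + 1.5 * np.cos(theta) - 0.4 * np.sin(2 * theta)   # synthetic centroid data
print(fourier_fit(theta, centroid_x))
```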
